Dec 09 11:18:18 localhost kernel: Linux version 5.14.0-648.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Fri Dec 5 11:18:23 UTC 2025
Dec 09 11:18:18 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Dec 09 11:18:18 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64 root=UUID=fcf6b761-831a-48a7-9f5f-068b5063763f ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec 09 11:18:18 localhost kernel: BIOS-provided physical RAM map:
Dec 09 11:18:18 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 09 11:18:18 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 09 11:18:18 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 09 11:18:18 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Dec 09 11:18:18 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Dec 09 11:18:18 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 09 11:18:18 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 09 11:18:18 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Dec 09 11:18:18 localhost kernel: NX (Execute Disable) protection: active
Dec 09 11:18:18 localhost kernel: APIC: Static calls initialized
Dec 09 11:18:18 localhost kernel: SMBIOS 2.8 present.
Dec 09 11:18:18 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Dec 09 11:18:18 localhost kernel: Hypervisor detected: KVM
Dec 09 11:18:18 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 09 11:18:18 localhost kernel: kvm-clock: using sched offset of 5715783672 cycles
Dec 09 11:18:18 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 09 11:18:18 localhost kernel: tsc: Detected 2800.000 MHz processor
Dec 09 11:18:18 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 09 11:18:18 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 09 11:18:18 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Dec 09 11:18:18 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 09 11:18:18 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Dec 09 11:18:18 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Dec 09 11:18:18 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Dec 09 11:18:18 localhost kernel: Using GB pages for direct mapping
Dec 09 11:18:18 localhost kernel: RAMDISK: [mem 0x2e955000-0x334a2fff]
Dec 09 11:18:18 localhost kernel: ACPI: Early table checksum verification disabled
Dec 09 11:18:18 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Dec 09 11:18:18 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 09 11:18:18 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 09 11:18:18 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 09 11:18:18 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Dec 09 11:18:18 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 09 11:18:18 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 09 11:18:18 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Dec 09 11:18:18 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Dec 09 11:18:18 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Dec 09 11:18:18 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Dec 09 11:18:18 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Dec 09 11:18:18 localhost kernel: No NUMA configuration found
Dec 09 11:18:18 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Dec 09 11:18:18 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd3000-0x23fffdfff]
Dec 09 11:18:18 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Dec 09 11:18:18 localhost kernel: Zone ranges:
Dec 09 11:18:18 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Dec 09 11:18:18 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Dec 09 11:18:18 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Dec 09 11:18:18 localhost kernel:   Device   empty
Dec 09 11:18:18 localhost kernel: Movable zone start for each node
Dec 09 11:18:18 localhost kernel: Early memory node ranges
Dec 09 11:18:18 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Dec 09 11:18:18 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Dec 09 11:18:18 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Dec 09 11:18:18 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Dec 09 11:18:18 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 09 11:18:18 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 09 11:18:18 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Dec 09 11:18:18 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Dec 09 11:18:18 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 09 11:18:18 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 09 11:18:18 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 09 11:18:18 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 09 11:18:18 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 09 11:18:18 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 09 11:18:18 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 09 11:18:18 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 09 11:18:18 localhost kernel: TSC deadline timer available
Dec 09 11:18:18 localhost kernel: CPU topo: Max. logical packages:   8
Dec 09 11:18:18 localhost kernel: CPU topo: Max. logical dies:       8
Dec 09 11:18:18 localhost kernel: CPU topo: Max. dies per package:   1
Dec 09 11:18:18 localhost kernel: CPU topo: Max. threads per core:   1
Dec 09 11:18:18 localhost kernel: CPU topo: Num. cores per package:     1
Dec 09 11:18:18 localhost kernel: CPU topo: Num. threads per package:   1
Dec 09 11:18:18 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Dec 09 11:18:18 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 09 11:18:18 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Dec 09 11:18:18 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Dec 09 11:18:18 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Dec 09 11:18:18 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Dec 09 11:18:18 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Dec 09 11:18:18 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Dec 09 11:18:18 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Dec 09 11:18:18 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Dec 09 11:18:18 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Dec 09 11:18:18 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Dec 09 11:18:18 localhost kernel: Booting paravirtualized kernel on KVM
Dec 09 11:18:18 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 09 11:18:18 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Dec 09 11:18:18 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Dec 09 11:18:18 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Dec 09 11:18:18 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Dec 09 11:18:18 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Dec 09 11:18:18 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64 root=UUID=fcf6b761-831a-48a7-9f5f-068b5063763f ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec 09 11:18:18 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64", will be passed to user space.
Dec 09 11:18:18 localhost kernel: random: crng init done
Dec 09 11:18:18 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec 09 11:18:18 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 09 11:18:18 localhost kernel: Fallback order for Node 0: 0 
Dec 09 11:18:18 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Dec 09 11:18:18 localhost kernel: Policy zone: Normal
Dec 09 11:18:18 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 09 11:18:18 localhost kernel: software IO TLB: area num 8.
Dec 09 11:18:18 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Dec 09 11:18:18 localhost kernel: ftrace: allocating 49357 entries in 193 pages
Dec 09 11:18:18 localhost kernel: ftrace: allocated 193 pages with 3 groups
Dec 09 11:18:18 localhost kernel: Dynamic Preempt: voluntary
Dec 09 11:18:18 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 09 11:18:18 localhost kernel: rcu:         RCU event tracing is enabled.
Dec 09 11:18:18 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Dec 09 11:18:18 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Dec 09 11:18:18 localhost kernel:         Rude variant of Tasks RCU enabled.
Dec 09 11:18:18 localhost kernel:         Tracing variant of Tasks RCU enabled.
Dec 09 11:18:18 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 09 11:18:18 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Dec 09 11:18:18 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec 09 11:18:18 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec 09 11:18:18 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec 09 11:18:18 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Dec 09 11:18:18 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 09 11:18:18 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Dec 09 11:18:18 localhost kernel: Console: colour VGA+ 80x25
Dec 09 11:18:18 localhost kernel: printk: console [ttyS0] enabled
Dec 09 11:18:18 localhost kernel: ACPI: Core revision 20230331
Dec 09 11:18:18 localhost kernel: APIC: Switch to symmetric I/O mode setup
Dec 09 11:18:18 localhost kernel: x2apic enabled
Dec 09 11:18:18 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Dec 09 11:18:18 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 09 11:18:18 localhost kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Dec 09 11:18:18 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 09 11:18:18 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 09 11:18:18 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 09 11:18:18 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 09 11:18:18 localhost kernel: Spectre V2 : Mitigation: Retpolines
Dec 09 11:18:18 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec 09 11:18:18 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 09 11:18:18 localhost kernel: RETBleed: Mitigation: untrained return thunk
Dec 09 11:18:18 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 09 11:18:18 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 09 11:18:18 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 09 11:18:18 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 09 11:18:18 localhost kernel: x86/bugs: return thunk changed
Dec 09 11:18:18 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec 09 11:18:18 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 09 11:18:18 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 09 11:18:18 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 09 11:18:18 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Dec 09 11:18:18 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec 09 11:18:18 localhost kernel: Freeing SMP alternatives memory: 40K
Dec 09 11:18:18 localhost kernel: pid_max: default: 32768 minimum: 301
Dec 09 11:18:18 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Dec 09 11:18:18 localhost kernel: landlock: Up and running.
Dec 09 11:18:18 localhost kernel: Yama: becoming mindful.
Dec 09 11:18:18 localhost kernel: SELinux:  Initializing.
Dec 09 11:18:18 localhost kernel: LSM support for eBPF active
Dec 09 11:18:18 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 09 11:18:18 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 09 11:18:18 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 09 11:18:18 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 09 11:18:18 localhost kernel: ... version:                0
Dec 09 11:18:18 localhost kernel: ... bit width:              48
Dec 09 11:18:18 localhost kernel: ... generic registers:      6
Dec 09 11:18:18 localhost kernel: ... value mask:             0000ffffffffffff
Dec 09 11:18:18 localhost kernel: ... max period:             00007fffffffffff
Dec 09 11:18:18 localhost kernel: ... fixed-purpose events:   0
Dec 09 11:18:18 localhost kernel: ... event mask:             000000000000003f
Dec 09 11:18:18 localhost kernel: signal: max sigframe size: 1776
Dec 09 11:18:18 localhost kernel: rcu: Hierarchical SRCU implementation.
Dec 09 11:18:18 localhost kernel: rcu:         Max phase no-delay instances is 400.
Dec 09 11:18:18 localhost kernel: smp: Bringing up secondary CPUs ...
Dec 09 11:18:18 localhost kernel: smpboot: x86: Booting SMP configuration:
Dec 09 11:18:18 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Dec 09 11:18:18 localhost kernel: smp: Brought up 1 node, 8 CPUs
Dec 09 11:18:18 localhost kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Dec 09 11:18:18 localhost kernel: node 0 deferred pages initialised in 70ms
Dec 09 11:18:18 localhost kernel: Memory: 7774608K/8388068K available (16384K kernel code, 5795K rwdata, 13916K rodata, 4192K init, 7164K bss, 607524K reserved, 0K cma-reserved)
Dec 09 11:18:18 localhost kernel: devtmpfs: initialized
Dec 09 11:18:18 localhost kernel: x86/mm: Memory block size: 128MB
Dec 09 11:18:18 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 09 11:18:18 localhost kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Dec 09 11:18:18 localhost kernel: pinctrl core: initialized pinctrl subsystem
Dec 09 11:18:18 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 09 11:18:18 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Dec 09 11:18:18 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 09 11:18:18 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 09 11:18:18 localhost kernel: audit: initializing netlink subsys (disabled)
Dec 09 11:18:18 localhost kernel: audit: type=2000 audit(1765279095.552:1): state=initialized audit_enabled=0 res=1
Dec 09 11:18:18 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Dec 09 11:18:18 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 09 11:18:18 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 09 11:18:18 localhost kernel: cpuidle: using governor menu
Dec 09 11:18:18 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 09 11:18:18 localhost kernel: PCI: Using configuration type 1 for base access
Dec 09 11:18:18 localhost kernel: PCI: Using configuration type 1 for extended access
Dec 09 11:18:18 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 09 11:18:18 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 09 11:18:18 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 09 11:18:18 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 09 11:18:18 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 09 11:18:18 localhost kernel: Demotion targets for Node 0: null
Dec 09 11:18:18 localhost kernel: cryptd: max_cpu_qlen set to 1000
Dec 09 11:18:18 localhost kernel: ACPI: Added _OSI(Module Device)
Dec 09 11:18:18 localhost kernel: ACPI: Added _OSI(Processor Device)
Dec 09 11:18:18 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 09 11:18:18 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 09 11:18:18 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 09 11:18:18 localhost kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 09 11:18:18 localhost kernel: ACPI: Interpreter enabled
Dec 09 11:18:18 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Dec 09 11:18:18 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Dec 09 11:18:18 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 09 11:18:18 localhost kernel: PCI: Using E820 reservations for host bridge windows
Dec 09 11:18:18 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Dec 09 11:18:18 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 09 11:18:18 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Dec 09 11:18:18 localhost kernel: acpiphp: Slot [3] registered
Dec 09 11:18:18 localhost kernel: acpiphp: Slot [4] registered
Dec 09 11:18:18 localhost kernel: acpiphp: Slot [5] registered
Dec 09 11:18:18 localhost kernel: acpiphp: Slot [6] registered
Dec 09 11:18:18 localhost kernel: acpiphp: Slot [7] registered
Dec 09 11:18:18 localhost kernel: acpiphp: Slot [8] registered
Dec 09 11:18:18 localhost kernel: acpiphp: Slot [9] registered
Dec 09 11:18:18 localhost kernel: acpiphp: Slot [10] registered
Dec 09 11:18:18 localhost kernel: acpiphp: Slot [11] registered
Dec 09 11:18:18 localhost kernel: acpiphp: Slot [12] registered
Dec 09 11:18:18 localhost kernel: acpiphp: Slot [13] registered
Dec 09 11:18:18 localhost kernel: acpiphp: Slot [14] registered
Dec 09 11:18:18 localhost kernel: acpiphp: Slot [15] registered
Dec 09 11:18:18 localhost kernel: acpiphp: Slot [16] registered
Dec 09 11:18:18 localhost kernel: acpiphp: Slot [17] registered
Dec 09 11:18:18 localhost kernel: acpiphp: Slot [18] registered
Dec 09 11:18:18 localhost kernel: acpiphp: Slot [19] registered
Dec 09 11:18:18 localhost kernel: acpiphp: Slot [20] registered
Dec 09 11:18:18 localhost kernel: acpiphp: Slot [21] registered
Dec 09 11:18:18 localhost kernel: acpiphp: Slot [22] registered
Dec 09 11:18:18 localhost kernel: acpiphp: Slot [23] registered
Dec 09 11:18:18 localhost kernel: acpiphp: Slot [24] registered
Dec 09 11:18:18 localhost kernel: acpiphp: Slot [25] registered
Dec 09 11:18:18 localhost kernel: acpiphp: Slot [26] registered
Dec 09 11:18:18 localhost kernel: acpiphp: Slot [27] registered
Dec 09 11:18:18 localhost kernel: acpiphp: Slot [28] registered
Dec 09 11:18:18 localhost kernel: acpiphp: Slot [29] registered
Dec 09 11:18:18 localhost kernel: acpiphp: Slot [30] registered
Dec 09 11:18:18 localhost kernel: acpiphp: Slot [31] registered
Dec 09 11:18:18 localhost kernel: PCI host bridge to bus 0000:00
Dec 09 11:18:18 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Dec 09 11:18:18 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Dec 09 11:18:18 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 09 11:18:18 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 09 11:18:18 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Dec 09 11:18:18 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 09 11:18:18 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Dec 09 11:18:18 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Dec 09 11:18:18 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Dec 09 11:18:18 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Dec 09 11:18:18 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Dec 09 11:18:18 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Dec 09 11:18:18 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Dec 09 11:18:18 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Dec 09 11:18:18 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Dec 09 11:18:18 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Dec 09 11:18:18 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Dec 09 11:18:18 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Dec 09 11:18:18 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Dec 09 11:18:18 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Dec 09 11:18:18 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Dec 09 11:18:18 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Dec 09 11:18:18 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Dec 09 11:18:18 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Dec 09 11:18:18 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 09 11:18:18 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec 09 11:18:18 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Dec 09 11:18:18 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Dec 09 11:18:18 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Dec 09 11:18:18 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Dec 09 11:18:18 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Dec 09 11:18:18 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Dec 09 11:18:18 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Dec 09 11:18:18 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Dec 09 11:18:18 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Dec 09 11:18:18 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Dec 09 11:18:18 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Dec 09 11:18:18 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Dec 09 11:18:18 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Dec 09 11:18:18 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Dec 09 11:18:18 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 09 11:18:18 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 09 11:18:18 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 09 11:18:18 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 09 11:18:18 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 09 11:18:18 localhost kernel: iommu: Default domain type: Translated
Dec 09 11:18:18 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 09 11:18:18 localhost kernel: SCSI subsystem initialized
Dec 09 11:18:18 localhost kernel: ACPI: bus type USB registered
Dec 09 11:18:18 localhost kernel: usbcore: registered new interface driver usbfs
Dec 09 11:18:18 localhost kernel: usbcore: registered new interface driver hub
Dec 09 11:18:18 localhost kernel: usbcore: registered new device driver usb
Dec 09 11:18:18 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 09 11:18:18 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Dec 09 11:18:18 localhost kernel: PTP clock support registered
Dec 09 11:18:18 localhost kernel: EDAC MC: Ver: 3.0.0
Dec 09 11:18:18 localhost kernel: NetLabel: Initializing
Dec 09 11:18:18 localhost kernel: NetLabel:  domain hash size = 128
Dec 09 11:18:18 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Dec 09 11:18:18 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Dec 09 11:18:18 localhost kernel: PCI: Using ACPI for IRQ routing
Dec 09 11:18:18 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 09 11:18:18 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 09 11:18:18 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Dec 09 11:18:18 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Dec 09 11:18:18 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Dec 09 11:18:18 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 09 11:18:18 localhost kernel: vgaarb: loaded
Dec 09 11:18:18 localhost kernel: clocksource: Switched to clocksource kvm-clock
Dec 09 11:18:18 localhost kernel: VFS: Disk quotas dquot_6.6.0
Dec 09 11:18:18 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 09 11:18:18 localhost kernel: pnp: PnP ACPI init
Dec 09 11:18:18 localhost kernel: pnp 00:03: [dma 2]
Dec 09 11:18:18 localhost kernel: pnp: PnP ACPI: found 5 devices
Dec 09 11:18:18 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 09 11:18:18 localhost kernel: NET: Registered PF_INET protocol family
Dec 09 11:18:18 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 09 11:18:18 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Dec 09 11:18:18 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 09 11:18:18 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 09 11:18:18 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Dec 09 11:18:18 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Dec 09 11:18:18 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Dec 09 11:18:18 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 09 11:18:18 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 09 11:18:18 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 09 11:18:18 localhost kernel: NET: Registered PF_XDP protocol family
Dec 09 11:18:18 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Dec 09 11:18:18 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Dec 09 11:18:18 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 09 11:18:18 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Dec 09 11:18:18 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Dec 09 11:18:18 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Dec 09 11:18:18 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 09 11:18:18 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec 09 11:18:18 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 76798 usecs
Dec 09 11:18:18 localhost kernel: PCI: CLS 0 bytes, default 64
Dec 09 11:18:18 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 09 11:18:18 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Dec 09 11:18:18 localhost kernel: Trying to unpack rootfs image as initramfs...
Dec 09 11:18:18 localhost kernel: ACPI: bus type thunderbolt registered
Dec 09 11:18:18 localhost kernel: Initialise system trusted keyrings
Dec 09 11:18:18 localhost kernel: Key type blacklist registered
Dec 09 11:18:18 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Dec 09 11:18:18 localhost kernel: zbud: loaded
Dec 09 11:18:18 localhost kernel: integrity: Platform Keyring initialized
Dec 09 11:18:18 localhost kernel: integrity: Machine keyring initialized
Dec 09 11:18:18 localhost kernel: Freeing initrd memory: 77112K
Dec 09 11:18:18 localhost kernel: NET: Registered PF_ALG protocol family
Dec 09 11:18:18 localhost kernel: xor: automatically using best checksumming function   avx       
Dec 09 11:18:18 localhost kernel: Key type asymmetric registered
Dec 09 11:18:18 localhost kernel: Asymmetric key parser 'x509' registered
Dec 09 11:18:18 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Dec 09 11:18:18 localhost kernel: io scheduler mq-deadline registered
Dec 09 11:18:18 localhost kernel: io scheduler kyber registered
Dec 09 11:18:18 localhost kernel: io scheduler bfq registered
Dec 09 11:18:18 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Dec 09 11:18:18 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Dec 09 11:18:18 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Dec 09 11:18:18 localhost kernel: ACPI: button: Power Button [PWRF]
Dec 09 11:18:18 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Dec 09 11:18:18 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec 09 11:18:18 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec 09 11:18:18 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 09 11:18:18 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 09 11:18:18 localhost kernel: Non-volatile memory driver v1.3
Dec 09 11:18:18 localhost kernel: rdac: device handler registered
Dec 09 11:18:18 localhost kernel: hp_sw: device handler registered
Dec 09 11:18:18 localhost kernel: emc: device handler registered
Dec 09 11:18:18 localhost kernel: alua: device handler registered
Dec 09 11:18:18 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Dec 09 11:18:18 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Dec 09 11:18:18 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Dec 09 11:18:18 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Dec 09 11:18:18 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Dec 09 11:18:18 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Dec 09 11:18:18 localhost kernel: usb usb1: Product: UHCI Host Controller
Dec 09 11:18:18 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-648.el9.x86_64 uhci_hcd
Dec 09 11:18:18 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Dec 09 11:18:18 localhost kernel: hub 1-0:1.0: USB hub found
Dec 09 11:18:18 localhost kernel: hub 1-0:1.0: 2 ports detected
Dec 09 11:18:18 localhost kernel: usbcore: registered new interface driver usbserial_generic
Dec 09 11:18:18 localhost kernel: usbserial: USB Serial support registered for generic
Dec 09 11:18:18 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 09 11:18:18 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 09 11:18:18 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 09 11:18:18 localhost kernel: mousedev: PS/2 mouse device common for all mice
Dec 09 11:18:18 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 09 11:18:18 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Dec 09 11:18:18 localhost kernel: rtc_cmos 00:04: registered as rtc0
Dec 09 11:18:18 localhost kernel: rtc_cmos 00:04: setting system clock to 2025-12-09T11:18:17 UTC (1765279097)
Dec 09 11:18:18 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Dec 09 11:18:18 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec 09 11:18:18 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Dec 09 11:18:18 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Dec 09 11:18:18 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 09 11:18:18 localhost kernel: usbcore: registered new interface driver usbhid
Dec 09 11:18:18 localhost kernel: usbhid: USB HID core driver
Dec 09 11:18:18 localhost kernel: drop_monitor: Initializing network drop monitor service
Dec 09 11:18:18 localhost kernel: Initializing XFRM netlink socket
Dec 09 11:18:18 localhost kernel: NET: Registered PF_INET6 protocol family
Dec 09 11:18:18 localhost kernel: Segment Routing with IPv6
Dec 09 11:18:18 localhost kernel: NET: Registered PF_PACKET protocol family
Dec 09 11:18:18 localhost kernel: mpls_gso: MPLS GSO support
Dec 09 11:18:18 localhost kernel: IPI shorthand broadcast: enabled
Dec 09 11:18:18 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Dec 09 11:18:18 localhost kernel: AES CTR mode by8 optimization enabled
Dec 09 11:18:18 localhost kernel: sched_clock: Marking stable (2770004796, 468457963)->(3517471024, -279008265)
Dec 09 11:18:18 localhost kernel: registered taskstats version 1
Dec 09 11:18:18 localhost kernel: Loading compiled-in X.509 certificates
Dec 09 11:18:18 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: bcc7fcdcfd9be61e8634554e9f7a1c01f32489d8'
Dec 09 11:18:18 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Dec 09 11:18:18 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Dec 09 11:18:18 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Dec 09 11:18:18 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Dec 09 11:18:18 localhost kernel: Demotion targets for Node 0: null
Dec 09 11:18:18 localhost kernel: page_owner is disabled
Dec 09 11:18:18 localhost kernel: Key type .fscrypt registered
Dec 09 11:18:18 localhost kernel: Key type fscrypt-provisioning registered
Dec 09 11:18:18 localhost kernel: Key type big_key registered
Dec 09 11:18:18 localhost kernel: Key type encrypted registered
Dec 09 11:18:18 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 09 11:18:18 localhost kernel: Loading compiled-in module X.509 certificates
Dec 09 11:18:18 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: bcc7fcdcfd9be61e8634554e9f7a1c01f32489d8'
Dec 09 11:18:18 localhost kernel: ima: Allocated hash algorithm: sha256
Dec 09 11:18:18 localhost kernel: ima: No architecture policies found
Dec 09 11:18:18 localhost kernel: evm: Initialising EVM extended attributes:
Dec 09 11:18:18 localhost kernel: evm: security.selinux
Dec 09 11:18:18 localhost kernel: evm: security.SMACK64 (disabled)
Dec 09 11:18:18 localhost kernel: evm: security.SMACK64EXEC (disabled)
Dec 09 11:18:18 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Dec 09 11:18:18 localhost kernel: evm: security.SMACK64MMAP (disabled)
Dec 09 11:18:18 localhost kernel: evm: security.apparmor (disabled)
Dec 09 11:18:18 localhost kernel: evm: security.ima
Dec 09 11:18:18 localhost kernel: evm: security.capability
Dec 09 11:18:18 localhost kernel: evm: HMAC attrs: 0x1
Dec 09 11:18:18 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Dec 09 11:18:18 localhost kernel: Running certificate verification RSA selftest
Dec 09 11:18:18 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Dec 09 11:18:18 localhost kernel: Running certificate verification ECDSA selftest
Dec 09 11:18:18 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Dec 09 11:18:18 localhost kernel: clk: Disabling unused clocks
Dec 09 11:18:18 localhost kernel: Freeing unused decrypted memory: 2028K
Dec 09 11:18:18 localhost kernel: Freeing unused kernel image (initmem) memory: 4192K
Dec 09 11:18:18 localhost kernel: Write protecting the kernel read-only data: 30720k
Dec 09 11:18:18 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 420K
Dec 09 11:18:18 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Dec 09 11:18:18 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Dec 09 11:18:18 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Dec 09 11:18:18 localhost kernel: usb 1-1: Manufacturer: QEMU
Dec 09 11:18:18 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Dec 09 11:18:18 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Dec 09 11:18:18 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Dec 09 11:18:18 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Dec 09 11:18:18 localhost kernel: Run /init as init process
Dec 09 11:18:18 localhost kernel:   with arguments:
Dec 09 11:18:18 localhost kernel:     /init
Dec 09 11:18:18 localhost kernel:   with environment:
Dec 09 11:18:18 localhost kernel:     HOME=/
Dec 09 11:18:18 localhost kernel:     TERM=linux
Dec 09 11:18:18 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64
Dec 09 11:18:18 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 09 11:18:18 localhost systemd[1]: Detected virtualization kvm.
Dec 09 11:18:18 localhost systemd[1]: Detected architecture x86-64.
Dec 09 11:18:18 localhost systemd[1]: Running in initrd.
Dec 09 11:18:18 localhost systemd[1]: No hostname configured, using default hostname.
Dec 09 11:18:18 localhost systemd[1]: Hostname set to <localhost>.
Dec 09 11:18:18 localhost systemd[1]: Initializing machine ID from VM UUID.
Dec 09 11:18:18 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Dec 09 11:18:18 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Dec 09 11:18:18 localhost systemd[1]: Reached target Local Encrypted Volumes.
Dec 09 11:18:18 localhost systemd[1]: Reached target Initrd /usr File System.
Dec 09 11:18:18 localhost systemd[1]: Reached target Local File Systems.
Dec 09 11:18:18 localhost systemd[1]: Reached target Path Units.
Dec 09 11:18:18 localhost systemd[1]: Reached target Slice Units.
Dec 09 11:18:18 localhost systemd[1]: Reached target Swaps.
Dec 09 11:18:18 localhost systemd[1]: Reached target Timer Units.
Dec 09 11:18:18 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Dec 09 11:18:18 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Dec 09 11:18:18 localhost systemd[1]: Listening on Journal Socket.
Dec 09 11:18:18 localhost systemd[1]: Listening on udev Control Socket.
Dec 09 11:18:18 localhost systemd[1]: Listening on udev Kernel Socket.
Dec 09 11:18:18 localhost systemd[1]: Reached target Socket Units.
Dec 09 11:18:18 localhost systemd[1]: Starting Create List of Static Device Nodes...
Dec 09 11:18:18 localhost systemd[1]: Starting Journal Service...
Dec 09 11:18:18 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec 09 11:18:18 localhost systemd[1]: Starting Apply Kernel Variables...
Dec 09 11:18:18 localhost systemd[1]: Starting Create System Users...
Dec 09 11:18:18 localhost systemd[1]: Starting Setup Virtual Console...
Dec 09 11:18:18 localhost systemd[1]: Finished Create List of Static Device Nodes.
Dec 09 11:18:18 localhost systemd[1]: Finished Apply Kernel Variables.
Dec 09 11:18:18 localhost systemd[1]: Finished Create System Users.
Dec 09 11:18:18 localhost systemd-journald[309]: Journal started
Dec 09 11:18:18 localhost systemd-journald[309]: Runtime Journal (/run/log/journal/2b8a60e1a15b4d5eb36fb3f643ae1f29) is 8.0M, max 153.6M, 145.6M free.
Dec 09 11:18:18 localhost systemd-sysusers[314]: Creating group 'users' with GID 100.
Dec 09 11:18:18 localhost systemd-sysusers[314]: Creating group 'dbus' with GID 81.
Dec 09 11:18:18 localhost systemd-sysusers[314]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Dec 09 11:18:18 localhost systemd[1]: Started Journal Service.
Dec 09 11:18:18 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Dec 09 11:18:18 localhost systemd[1]: Starting Create Volatile Files and Directories...
Dec 09 11:18:18 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Dec 09 11:18:18 localhost systemd[1]: Finished Create Volatile Files and Directories.
Dec 09 11:18:18 localhost systemd[1]: Finished Setup Virtual Console.
Dec 09 11:18:18 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Dec 09 11:18:18 localhost systemd[1]: Starting dracut cmdline hook...
Dec 09 11:18:18 localhost dracut-cmdline[328]: dracut-9 dracut-057-102.git20250818.el9
Dec 09 11:18:18 localhost dracut-cmdline[328]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64 root=UUID=fcf6b761-831a-48a7-9f5f-068b5063763f ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec 09 11:18:18 localhost systemd[1]: Finished dracut cmdline hook.
Dec 09 11:18:18 localhost systemd[1]: Starting dracut pre-udev hook...
Dec 09 11:18:18 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 09 11:18:18 localhost kernel: device-mapper: uevent: version 1.0.3
Dec 09 11:18:18 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Dec 09 11:18:18 localhost kernel: RPC: Registered named UNIX socket transport module.
Dec 09 11:18:18 localhost kernel: RPC: Registered udp transport module.
Dec 09 11:18:18 localhost kernel: RPC: Registered tcp transport module.
Dec 09 11:18:18 localhost kernel: RPC: Registered tcp-with-tls transport module.
Dec 09 11:18:18 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Dec 09 11:18:18 localhost rpc.statd[444]: Version 2.5.4 starting
Dec 09 11:18:18 localhost rpc.statd[444]: Initializing NSM state
Dec 09 11:18:18 localhost rpc.idmapd[449]: Setting log level to 0
Dec 09 11:18:18 localhost systemd[1]: Finished dracut pre-udev hook.
Dec 09 11:18:18 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec 09 11:18:19 localhost systemd-udevd[462]: Using default interface naming scheme 'rhel-9.0'.
Dec 09 11:18:19 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec 09 11:18:19 localhost systemd[1]: Starting dracut pre-trigger hook...
Dec 09 11:18:19 localhost systemd[1]: Finished dracut pre-trigger hook.
Dec 09 11:18:19 localhost systemd[1]: Starting Coldplug All udev Devices...
Dec 09 11:18:19 localhost systemd[1]: Created slice Slice /system/modprobe.
Dec 09 11:18:19 localhost systemd[1]: Starting Load Kernel Module configfs...
Dec 09 11:18:19 localhost systemd[1]: Finished Coldplug All udev Devices.
Dec 09 11:18:19 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 09 11:18:19 localhost systemd[1]: Finished Load Kernel Module configfs.
Dec 09 11:18:19 localhost systemd[1]: Mounting Kernel Configuration File System...
Dec 09 11:18:19 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec 09 11:18:19 localhost systemd[1]: Reached target Network.
Dec 09 11:18:19 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec 09 11:18:19 localhost systemd[1]: Starting dracut initqueue hook...
Dec 09 11:18:19 localhost systemd[1]: Mounted Kernel Configuration File System.
Dec 09 11:18:19 localhost systemd[1]: Reached target System Initialization.
Dec 09 11:18:19 localhost systemd[1]: Reached target Basic System.
Dec 09 11:18:19 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Dec 09 11:18:19 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Dec 09 11:18:19 localhost kernel:  vda: vda1
Dec 09 11:18:19 localhost kernel: libata version 3.00 loaded.
Dec 09 11:18:19 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Dec 09 11:18:19 localhost kernel: scsi host0: ata_piix
Dec 09 11:18:19 localhost kernel: scsi host1: ata_piix
Dec 09 11:18:19 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Dec 09 11:18:19 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Dec 09 11:18:19 localhost systemd[1]: Found device /dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f.
Dec 09 11:18:19 localhost systemd[1]: Reached target Initrd Root Device.
Dec 09 11:18:19 localhost kernel: ata1: found unknown device (class 0)
Dec 09 11:18:19 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec 09 11:18:19 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Dec 09 11:18:19 localhost systemd-udevd[474]: Network interface NamePolicy= disabled on kernel command line.
Dec 09 11:18:19 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Dec 09 11:18:19 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec 09 11:18:19 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 09 11:18:19 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Dec 09 11:18:19 localhost systemd[1]: Finished dracut initqueue hook.
Dec 09 11:18:19 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Dec 09 11:18:19 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Dec 09 11:18:19 localhost systemd[1]: Reached target Remote File Systems.
Dec 09 11:18:19 localhost systemd[1]: Starting dracut pre-mount hook...
Dec 09 11:18:19 localhost systemd[1]: Finished dracut pre-mount hook.
Dec 09 11:18:19 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f...
Dec 09 11:18:19 localhost systemd-fsck[558]: /usr/sbin/fsck.xfs: XFS file system.
Dec 09 11:18:19 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f.
Dec 09 11:18:19 localhost systemd[1]: Mounting /sysroot...
Dec 09 11:18:20 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Dec 09 11:18:20 localhost kernel: XFS (vda1): Mounting V5 Filesystem fcf6b761-831a-48a7-9f5f-068b5063763f
Dec 09 11:18:20 localhost kernel: XFS (vda1): Ending clean mount
Dec 09 11:18:20 localhost systemd[1]: Mounted /sysroot.
Dec 09 11:18:20 localhost systemd[1]: Reached target Initrd Root File System.
Dec 09 11:18:20 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Dec 09 11:18:20 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 09 11:18:20 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Dec 09 11:18:20 localhost systemd[1]: Reached target Initrd File Systems.
Dec 09 11:18:20 localhost systemd[1]: Reached target Initrd Default Target.
Dec 09 11:18:20 localhost systemd[1]: Starting dracut mount hook...
Dec 09 11:18:20 localhost systemd[1]: Finished dracut mount hook.
Dec 09 11:18:20 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Dec 09 11:18:20 localhost rpc.idmapd[449]: exiting on signal 15
Dec 09 11:18:20 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Dec 09 11:18:20 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Dec 09 11:18:20 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Dec 09 11:18:20 localhost systemd[1]: Stopped target Network.
Dec 09 11:18:20 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Dec 09 11:18:20 localhost systemd[1]: Stopped target Timer Units.
Dec 09 11:18:20 localhost systemd[1]: dbus.socket: Deactivated successfully.
Dec 09 11:18:20 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Dec 09 11:18:20 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 09 11:18:20 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Dec 09 11:18:20 localhost systemd[1]: Stopped target Initrd Default Target.
Dec 09 11:18:20 localhost systemd[1]: Stopped target Basic System.
Dec 09 11:18:20 localhost systemd[1]: Stopped target Initrd Root Device.
Dec 09 11:18:20 localhost systemd[1]: Stopped target Initrd /usr File System.
Dec 09 11:18:20 localhost systemd[1]: Stopped target Path Units.
Dec 09 11:18:20 localhost systemd[1]: Stopped target Remote File Systems.
Dec 09 11:18:20 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Dec 09 11:18:20 localhost systemd[1]: Stopped target Slice Units.
Dec 09 11:18:20 localhost systemd[1]: Stopped target Socket Units.
Dec 09 11:18:20 localhost systemd[1]: Stopped target System Initialization.
Dec 09 11:18:20 localhost systemd[1]: Stopped target Local File Systems.
Dec 09 11:18:20 localhost systemd[1]: Stopped target Swaps.
Dec 09 11:18:20 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Dec 09 11:18:20 localhost systemd[1]: Stopped dracut mount hook.
Dec 09 11:18:20 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 09 11:18:20 localhost systemd[1]: Stopped dracut pre-mount hook.
Dec 09 11:18:20 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Dec 09 11:18:20 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 09 11:18:20 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Dec 09 11:18:20 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 09 11:18:20 localhost systemd[1]: Stopped dracut initqueue hook.
Dec 09 11:18:20 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 09 11:18:20 localhost systemd[1]: Stopped Apply Kernel Variables.
Dec 09 11:18:20 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 09 11:18:20 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Dec 09 11:18:20 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 09 11:18:20 localhost systemd[1]: Stopped Coldplug All udev Devices.
Dec 09 11:18:20 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 09 11:18:20 localhost systemd[1]: Stopped dracut pre-trigger hook.
Dec 09 11:18:20 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Dec 09 11:18:20 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 09 11:18:20 localhost systemd[1]: Stopped Setup Virtual Console.
Dec 09 11:18:20 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec 09 11:18:20 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 09 11:18:20 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 09 11:18:20 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Dec 09 11:18:20 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 09 11:18:20 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Dec 09 11:18:20 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 09 11:18:20 localhost systemd[1]: Closed udev Control Socket.
Dec 09 11:18:20 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 09 11:18:20 localhost systemd[1]: Closed udev Kernel Socket.
Dec 09 11:18:20 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 09 11:18:20 localhost systemd[1]: Stopped dracut pre-udev hook.
Dec 09 11:18:20 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 09 11:18:20 localhost systemd[1]: Stopped dracut cmdline hook.
Dec 09 11:18:20 localhost systemd[1]: Starting Cleanup udev Database...
Dec 09 11:18:20 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 09 11:18:20 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Dec 09 11:18:20 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 09 11:18:20 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Dec 09 11:18:20 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Dec 09 11:18:20 localhost systemd[1]: Stopped Create System Users.
Dec 09 11:18:20 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec 09 11:18:20 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Dec 09 11:18:20 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 09 11:18:20 localhost systemd[1]: Finished Cleanup udev Database.
Dec 09 11:18:20 localhost systemd[1]: Reached target Switch Root.
Dec 09 11:18:20 localhost systemd[1]: Starting Switch Root...
Dec 09 11:18:20 localhost systemd[1]: Switching root.
Dec 09 11:18:20 localhost systemd-journald[309]: Journal stopped
Dec 09 11:18:22 localhost systemd-journald[309]: Received SIGTERM from PID 1 (systemd).
Dec 09 11:18:22 localhost kernel: audit: type=1404 audit(1765279101.185:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Dec 09 11:18:22 localhost kernel: SELinux:  policy capability network_peer_controls=1
Dec 09 11:18:22 localhost kernel: SELinux:  policy capability open_perms=1
Dec 09 11:18:22 localhost kernel: SELinux:  policy capability extended_socket_class=1
Dec 09 11:18:22 localhost kernel: SELinux:  policy capability always_check_network=0
Dec 09 11:18:22 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 09 11:18:22 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 09 11:18:22 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 09 11:18:22 localhost kernel: audit: type=1403 audit(1765279101.332:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 09 11:18:22 localhost systemd[1]: Successfully loaded SELinux policy in 168.289ms.
Dec 09 11:18:22 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 40.531ms.
Dec 09 11:18:22 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 09 11:18:22 localhost systemd[1]: Detected virtualization kvm.
Dec 09 11:18:22 localhost systemd[1]: Detected architecture x86-64.
Dec 09 11:18:22 localhost systemd-rc-local-generator[640]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 11:18:22 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 09 11:18:22 localhost systemd[1]: Stopped Switch Root.
Dec 09 11:18:22 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 09 11:18:22 localhost systemd[1]: Created slice Slice /system/getty.
Dec 09 11:18:22 localhost systemd[1]: Created slice Slice /system/serial-getty.
Dec 09 11:18:22 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Dec 09 11:18:22 localhost systemd[1]: Created slice User and Session Slice.
Dec 09 11:18:22 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Dec 09 11:18:22 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Dec 09 11:18:22 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Dec 09 11:18:22 localhost systemd[1]: Reached target Local Encrypted Volumes.
Dec 09 11:18:22 localhost systemd[1]: Stopped target Switch Root.
Dec 09 11:18:22 localhost systemd[1]: Stopped target Initrd File Systems.
Dec 09 11:18:22 localhost systemd[1]: Stopped target Initrd Root File System.
Dec 09 11:18:22 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Dec 09 11:18:22 localhost systemd[1]: Reached target Path Units.
Dec 09 11:18:22 localhost systemd[1]: Reached target rpc_pipefs.target.
Dec 09 11:18:22 localhost systemd[1]: Reached target Slice Units.
Dec 09 11:18:22 localhost systemd[1]: Reached target Swaps.
Dec 09 11:18:22 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Dec 09 11:18:22 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Dec 09 11:18:22 localhost systemd[1]: Reached target RPC Port Mapper.
Dec 09 11:18:22 localhost systemd[1]: Listening on Process Core Dump Socket.
Dec 09 11:18:22 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Dec 09 11:18:22 localhost systemd[1]: Listening on udev Control Socket.
Dec 09 11:18:22 localhost systemd[1]: Listening on udev Kernel Socket.
Dec 09 11:18:22 localhost systemd[1]: Mounting Huge Pages File System...
Dec 09 11:18:22 localhost systemd[1]: Mounting POSIX Message Queue File System...
Dec 09 11:18:22 localhost systemd[1]: Mounting Kernel Debug File System...
Dec 09 11:18:22 localhost systemd[1]: Mounting Kernel Trace File System...
Dec 09 11:18:22 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec 09 11:18:22 localhost systemd[1]: Starting Create List of Static Device Nodes...
Dec 09 11:18:22 localhost systemd[1]: Starting Load Kernel Module configfs...
Dec 09 11:18:22 localhost systemd[1]: Starting Load Kernel Module drm...
Dec 09 11:18:22 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Dec 09 11:18:22 localhost systemd[1]: Starting Load Kernel Module fuse...
Dec 09 11:18:22 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Dec 09 11:18:22 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 09 11:18:22 localhost systemd[1]: Stopped File System Check on Root Device.
Dec 09 11:18:22 localhost systemd[1]: Stopped Journal Service.
Dec 09 11:18:22 localhost systemd[1]: Starting Journal Service...
Dec 09 11:18:22 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec 09 11:18:22 localhost systemd[1]: Starting Generate network units from Kernel command line...
Dec 09 11:18:22 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 09 11:18:22 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Dec 09 11:18:22 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 09 11:18:22 localhost systemd[1]: Starting Apply Kernel Variables...
Dec 09 11:18:22 localhost systemd[1]: Starting Coldplug All udev Devices...
Dec 09 11:18:22 localhost systemd[1]: Mounted Huge Pages File System.
Dec 09 11:18:22 localhost systemd[1]: Mounted POSIX Message Queue File System.
Dec 09 11:18:22 localhost systemd[1]: Mounted Kernel Debug File System.
Dec 09 11:18:22 localhost systemd[1]: Mounted Kernel Trace File System.
Dec 09 11:18:22 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Dec 09 11:18:22 localhost systemd[1]: Finished Create List of Static Device Nodes.
Dec 09 11:18:22 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 09 11:18:22 localhost systemd[1]: Finished Load Kernel Module configfs.
Dec 09 11:18:22 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 09 11:18:22 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Dec 09 11:18:22 localhost systemd-journald[681]: Journal started
Dec 09 11:18:22 localhost systemd-journald[681]: Runtime Journal (/run/log/journal/4d4ef2323cc3337bbfd9081b2a323b4e) is 8.0M, max 153.6M, 145.6M free.
Dec 09 11:18:22 localhost systemd[1]: Queued start job for default target Multi-User System.
Dec 09 11:18:22 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 09 11:18:22 localhost systemd[1]: Started Journal Service.
Dec 09 11:18:22 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Dec 09 11:18:22 localhost kernel: ACPI: bus type drm_connector registered
Dec 09 11:18:22 localhost systemd[1]: Finished Generate network units from Kernel command line.
Dec 09 11:18:22 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Dec 09 11:18:22 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec 09 11:18:22 localhost kernel: fuse: init (API version 7.37)
Dec 09 11:18:22 localhost systemd[1]: Starting Rebuild Hardware Database...
Dec 09 11:18:22 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Dec 09 11:18:22 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 09 11:18:22 localhost systemd[1]: Starting Load/Save OS Random Seed...
Dec 09 11:18:22 localhost systemd[1]: Starting Create System Users...
Dec 09 11:18:22 localhost systemd-journald[681]: Runtime Journal (/run/log/journal/4d4ef2323cc3337bbfd9081b2a323b4e) is 8.0M, max 153.6M, 145.6M free.
Dec 09 11:18:22 localhost systemd-journald[681]: Received client request to flush runtime journal.
Dec 09 11:18:22 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 09 11:18:22 localhost systemd[1]: Finished Load Kernel Module drm.
Dec 09 11:18:22 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 09 11:18:22 localhost systemd[1]: Finished Load Kernel Module fuse.
Dec 09 11:18:22 localhost systemd[1]: Finished Apply Kernel Variables.
Dec 09 11:18:22 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Dec 09 11:18:22 localhost systemd[1]: Mounting FUSE Control File System...
Dec 09 11:18:22 localhost systemd[1]: Mounted FUSE Control File System.
Dec 09 11:18:22 localhost systemd[1]: Finished Load/Save OS Random Seed.
Dec 09 11:18:22 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec 09 11:18:22 localhost systemd[1]: Finished Create System Users.
Dec 09 11:18:22 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Dec 09 11:18:22 localhost systemd[1]: Finished Coldplug All udev Devices.
Dec 09 11:18:22 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Dec 09 11:18:22 localhost systemd[1]: Reached target Preparation for Local File Systems.
Dec 09 11:18:22 localhost systemd[1]: Reached target Local File Systems.
Dec 09 11:18:22 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Dec 09 11:18:22 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Dec 09 11:18:22 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 09 11:18:22 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Dec 09 11:18:22 localhost systemd[1]: Starting Automatic Boot Loader Update...
Dec 09 11:18:22 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Dec 09 11:18:22 localhost systemd[1]: Starting Create Volatile Files and Directories...
Dec 09 11:18:22 localhost bootctl[699]: Couldn't find EFI system partition, skipping.
Dec 09 11:18:22 localhost systemd[1]: Finished Automatic Boot Loader Update.
Dec 09 11:18:22 localhost systemd[1]: Finished Create Volatile Files and Directories.
Dec 09 11:18:22 localhost systemd[1]: Starting Security Auditing Service...
Dec 09 11:18:22 localhost systemd[1]: Starting RPC Bind...
Dec 09 11:18:22 localhost systemd[1]: Starting Rebuild Journal Catalog...
Dec 09 11:18:22 localhost auditd[705]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Dec 09 11:18:22 localhost auditd[705]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Dec 09 11:18:22 localhost systemd[1]: Finished Rebuild Journal Catalog.
Dec 09 11:18:22 localhost systemd[1]: Started RPC Bind.
Dec 09 11:18:22 localhost augenrules[710]: /sbin/augenrules: No change
Dec 09 11:18:22 localhost augenrules[725]: No rules
Dec 09 11:18:22 localhost augenrules[725]: enabled 1
Dec 09 11:18:22 localhost augenrules[725]: failure 1
Dec 09 11:18:22 localhost augenrules[725]: pid 705
Dec 09 11:18:22 localhost augenrules[725]: rate_limit 0
Dec 09 11:18:22 localhost augenrules[725]: backlog_limit 8192
Dec 09 11:18:22 localhost augenrules[725]: lost 0
Dec 09 11:18:22 localhost augenrules[725]: backlog 3
Dec 09 11:18:22 localhost augenrules[725]: backlog_wait_time 60000
Dec 09 11:18:22 localhost augenrules[725]: backlog_wait_time_actual 0
Dec 09 11:18:22 localhost augenrules[725]: enabled 1
Dec 09 11:18:22 localhost augenrules[725]: failure 1
Dec 09 11:18:22 localhost augenrules[725]: pid 705
Dec 09 11:18:22 localhost augenrules[725]: rate_limit 0
Dec 09 11:18:22 localhost augenrules[725]: backlog_limit 8192
Dec 09 11:18:22 localhost augenrules[725]: lost 0
Dec 09 11:18:22 localhost augenrules[725]: backlog 4
Dec 09 11:18:22 localhost augenrules[725]: backlog_wait_time 60000
Dec 09 11:18:22 localhost augenrules[725]: backlog_wait_time_actual 0
Dec 09 11:18:22 localhost augenrules[725]: enabled 1
Dec 09 11:18:22 localhost augenrules[725]: failure 1
Dec 09 11:18:22 localhost augenrules[725]: pid 705
Dec 09 11:18:22 localhost augenrules[725]: rate_limit 0
Dec 09 11:18:22 localhost augenrules[725]: backlog_limit 8192
Dec 09 11:18:22 localhost augenrules[725]: lost 0
Dec 09 11:18:22 localhost augenrules[725]: backlog 4
Dec 09 11:18:22 localhost augenrules[725]: backlog_wait_time 60000
Dec 09 11:18:22 localhost augenrules[725]: backlog_wait_time_actual 0
Dec 09 11:18:22 localhost systemd[1]: Started Security Auditing Service.
Dec 09 11:18:22 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Dec 09 11:18:22 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Dec 09 11:18:23 localhost systemd[1]: Finished Rebuild Hardware Database.
Dec 09 11:18:23 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec 09 11:18:23 localhost systemd-udevd[733]: Using default interface naming scheme 'rhel-9.0'.
Dec 09 11:18:23 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec 09 11:18:23 localhost systemd[1]: Starting Load Kernel Module configfs...
Dec 09 11:18:23 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Dec 09 11:18:23 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 09 11:18:23 localhost systemd[1]: Finished Load Kernel Module configfs.
Dec 09 11:18:23 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Dec 09 11:18:23 localhost systemd[1]: Starting Update is Completed...
Dec 09 11:18:23 localhost systemd-udevd[736]: Network interface NamePolicy= disabled on kernel command line.
Dec 09 11:18:23 localhost systemd[1]: Finished Update is Completed.
Dec 09 11:18:23 localhost systemd[1]: Reached target System Initialization.
Dec 09 11:18:23 localhost systemd[1]: Started dnf makecache --timer.
Dec 09 11:18:23 localhost systemd[1]: Started Daily rotation of log files.
Dec 09 11:18:23 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Dec 09 11:18:23 localhost systemd[1]: Reached target Timer Units.
Dec 09 11:18:23 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Dec 09 11:18:23 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Dec 09 11:18:23 localhost systemd[1]: Reached target Socket Units.
Dec 09 11:18:23 localhost systemd[1]: Starting D-Bus System Message Bus...
Dec 09 11:18:23 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 09 11:18:23 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Dec 09 11:18:23 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Dec 09 11:18:23 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Dec 09 11:18:23 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec 09 11:18:23 localhost systemd[1]: Started D-Bus System Message Bus.
Dec 09 11:18:23 localhost systemd[1]: Reached target Basic System.
Dec 09 11:18:23 localhost dbus-broker-lau[776]: Ready
Dec 09 11:18:23 localhost systemd[1]: Starting NTP client/server...
Dec 09 11:18:23 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Dec 09 11:18:23 localhost chronyd[790]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec 09 11:18:23 localhost chronyd[790]: Loaded 0 symmetric keys
Dec 09 11:18:23 localhost chronyd[790]: Using right/UTC timezone to obtain leap second data
Dec 09 11:18:23 localhost chronyd[790]: Loaded seccomp filter (level 2)
Dec 09 11:18:24 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Dec 09 11:18:24 localhost systemd[1]: Starting IPv4 firewall with iptables...
Dec 09 11:18:24 localhost systemd[1]: Started irqbalance daemon.
Dec 09 11:18:24 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Dec 09 11:18:24 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 09 11:18:24 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 09 11:18:24 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 09 11:18:24 localhost systemd[1]: Reached target sshd-keygen.target.
Dec 09 11:18:24 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Dec 09 11:18:24 localhost systemd[1]: Reached target User and Group Name Lookups.
Dec 09 11:18:24 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Dec 09 11:18:24 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Dec 09 11:18:24 localhost kernel: Console: switching to colour dummy device 80x25
Dec 09 11:18:24 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Dec 09 11:18:24 localhost kernel: [drm] features: -context_init
Dec 09 11:18:24 localhost kernel: [drm] number of scanouts: 1
Dec 09 11:18:24 localhost kernel: [drm] number of cap sets: 0
Dec 09 11:18:24 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Dec 09 11:18:24 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Dec 09 11:18:24 localhost kernel: Console: switching to colour frame buffer device 128x48
Dec 09 11:18:24 localhost systemd[1]: Starting User Login Management...
Dec 09 11:18:24 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Dec 09 11:18:24 localhost systemd[1]: Started NTP client/server.
Dec 09 11:18:24 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Dec 09 11:18:24 localhost kernel: kvm_amd: TSC scaling supported
Dec 09 11:18:24 localhost kernel: kvm_amd: Nested Virtualization enabled
Dec 09 11:18:24 localhost kernel: kvm_amd: Nested Paging enabled
Dec 09 11:18:24 localhost kernel: kvm_amd: LBR virtualization supported
Dec 09 11:18:24 localhost systemd-logind[799]: New seat seat0.
Dec 09 11:18:24 localhost systemd-logind[799]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 09 11:18:24 localhost systemd-logind[799]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec 09 11:18:24 localhost systemd[1]: Started User Login Management.
Dec 09 11:18:24 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Dec 09 11:18:24 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Dec 09 11:18:24 localhost iptables.init[792]: iptables: Applying firewall rules: [  OK  ]
Dec 09 11:18:24 localhost systemd[1]: Finished IPv4 firewall with iptables.
Dec 09 11:18:25 localhost cloud-init[842]: Cloud-init v. 24.4-7.el9 running 'init-local' at Tue, 09 Dec 2025 11:18:25 +0000. Up 10.26 seconds.
Dec 09 11:18:25 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Dec 09 11:18:25 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Dec 09 11:18:25 localhost systemd[1]: run-cloud\x2dinit-tmp-tmp4ejdhz1s.mount: Deactivated successfully.
Dec 09 11:18:25 localhost systemd[1]: Starting Hostname Service...
Dec 09 11:18:25 localhost systemd[1]: Started Hostname Service.
Dec 09 11:18:25 np0005551750.novalocal systemd-hostnamed[856]: Hostname set to <np0005551750.novalocal> (static)
Dec 09 11:18:25 np0005551750.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Dec 09 11:18:25 np0005551750.novalocal systemd[1]: Reached target Preparation for Network.
Dec 09 11:18:25 np0005551750.novalocal systemd[1]: Starting Network Manager...
Dec 09 11:18:25 np0005551750.novalocal NetworkManager[860]: <info>  [1765279105.8810] NetworkManager (version 1.54.2-1.el9) is starting... (boot:3b8ce532-7834-4232-b208-67ea0773ffd0)
Dec 09 11:18:25 np0005551750.novalocal NetworkManager[860]: <info>  [1765279105.8816] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec 09 11:18:25 np0005551750.novalocal NetworkManager[860]: <info>  [1765279105.8919] manager[0x562479b9f000]: monitoring kernel firmware directory '/lib/firmware'.
Dec 09 11:18:25 np0005551750.novalocal NetworkManager[860]: <info>  [1765279105.8964] hostname: hostname: using hostnamed
Dec 09 11:18:25 np0005551750.novalocal NetworkManager[860]: <info>  [1765279105.8965] hostname: static hostname changed from (none) to "np0005551750.novalocal"
Dec 09 11:18:25 np0005551750.novalocal NetworkManager[860]: <info>  [1765279105.8970] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec 09 11:18:25 np0005551750.novalocal NetworkManager[860]: <info>  [1765279105.9143] manager[0x562479b9f000]: rfkill: Wi-Fi hardware radio set enabled
Dec 09 11:18:25 np0005551750.novalocal NetworkManager[860]: <info>  [1765279105.9145] manager[0x562479b9f000]: rfkill: WWAN hardware radio set enabled
Dec 09 11:18:25 np0005551750.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Dec 09 11:18:25 np0005551750.novalocal NetworkManager[860]: <info>  [1765279105.9265] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-device-plugin-team.so)
Dec 09 11:18:25 np0005551750.novalocal NetworkManager[860]: <info>  [1765279105.9267] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 09 11:18:25 np0005551750.novalocal NetworkManager[860]: <info>  [1765279105.9268] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 09 11:18:25 np0005551750.novalocal NetworkManager[860]: <info>  [1765279105.9268] manager: Networking is enabled by state file
Dec 09 11:18:25 np0005551750.novalocal NetworkManager[860]: <info>  [1765279105.9271] settings: Loaded settings plugin: keyfile (internal)
Dec 09 11:18:25 np0005551750.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 09 11:18:25 np0005551750.novalocal NetworkManager[860]: <info>  [1765279105.9753] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec 09 11:18:25 np0005551750.novalocal NetworkManager[860]: <info>  [1765279105.9783] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec 09 11:18:25 np0005551750.novalocal NetworkManager[860]: <info>  [1765279105.9798] dhcp: init: Using DHCP client 'internal'
Dec 09 11:18:25 np0005551750.novalocal NetworkManager[860]: <info>  [1765279105.9801] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec 09 11:18:25 np0005551750.novalocal NetworkManager[860]: <info>  [1765279105.9817] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 09 11:18:25 np0005551750.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 09 11:18:25 np0005551750.novalocal NetworkManager[860]: <info>  [1765279105.9826] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec 09 11:18:25 np0005551750.novalocal NetworkManager[860]: <info>  [1765279105.9833] device (lo): Activation: starting connection 'lo' (8ff964e8-13df-4b37-96bf-869f14ef83b9)
Dec 09 11:18:25 np0005551750.novalocal NetworkManager[860]: <info>  [1765279105.9844] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec 09 11:18:25 np0005551750.novalocal NetworkManager[860]: <info>  [1765279105.9847] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 09 11:18:26 np0005551750.novalocal systemd[1]: Started Network Manager.
Dec 09 11:18:26 np0005551750.novalocal NetworkManager[860]: <info>  [1765279106.0289] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 09 11:18:26 np0005551750.novalocal NetworkManager[860]: <info>  [1765279106.0295] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec 09 11:18:26 np0005551750.novalocal NetworkManager[860]: <info>  [1765279106.0298] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec 09 11:18:26 np0005551750.novalocal NetworkManager[860]: <info>  [1765279106.0301] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec 09 11:18:26 np0005551750.novalocal NetworkManager[860]: <info>  [1765279106.0304] device (eth0): carrier: link connected
Dec 09 11:18:26 np0005551750.novalocal NetworkManager[860]: <info>  [1765279106.0310] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec 09 11:18:26 np0005551750.novalocal systemd[1]: Reached target Network.
Dec 09 11:18:26 np0005551750.novalocal NetworkManager[860]: <info>  [1765279106.0321] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec 09 11:18:26 np0005551750.novalocal NetworkManager[860]: <info>  [1765279106.0337] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 09 11:18:26 np0005551750.novalocal NetworkManager[860]: <info>  [1765279106.0342] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 09 11:18:26 np0005551750.novalocal NetworkManager[860]: <info>  [1765279106.0343] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 09 11:18:26 np0005551750.novalocal NetworkManager[860]: <info>  [1765279106.0345] manager: NetworkManager state is now CONNECTING
Dec 09 11:18:26 np0005551750.novalocal NetworkManager[860]: <info>  [1765279106.0347] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 09 11:18:26 np0005551750.novalocal NetworkManager[860]: <info>  [1765279106.0355] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 09 11:18:26 np0005551750.novalocal NetworkManager[860]: <info>  [1765279106.0358] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 09 11:18:26 np0005551750.novalocal NetworkManager[860]: <info>  [1765279106.0434] dhcp4 (eth0): state changed new lease, address=38.102.83.98
Dec 09 11:18:26 np0005551750.novalocal NetworkManager[860]: <info>  [1765279106.0447] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec 09 11:18:26 np0005551750.novalocal systemd[1]: Starting Network Manager Wait Online...
Dec 09 11:18:26 np0005551750.novalocal NetworkManager[860]: <info>  [1765279106.0470] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 09 11:18:26 np0005551750.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Dec 09 11:18:26 np0005551750.novalocal NetworkManager[860]: <info>  [1765279106.0714] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec 09 11:18:26 np0005551750.novalocal NetworkManager[860]: <info>  [1765279106.0718] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 09 11:18:26 np0005551750.novalocal NetworkManager[860]: <info>  [1765279106.0720] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec 09 11:18:26 np0005551750.novalocal NetworkManager[860]: <info>  [1765279106.0732] device (lo): Activation: successful, device activated.
Dec 09 11:18:26 np0005551750.novalocal NetworkManager[860]: <info>  [1765279106.0744] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 09 11:18:26 np0005551750.novalocal NetworkManager[860]: <info>  [1765279106.0751] manager: NetworkManager state is now CONNECTED_SITE
Dec 09 11:18:26 np0005551750.novalocal NetworkManager[860]: <info>  [1765279106.0756] device (eth0): Activation: successful, device activated.
Dec 09 11:18:26 np0005551750.novalocal NetworkManager[860]: <info>  [1765279106.0762] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec 09 11:18:26 np0005551750.novalocal NetworkManager[860]: <info>  [1765279106.0769] manager: startup complete
Dec 09 11:18:26 np0005551750.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Dec 09 11:18:26 np0005551750.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec 09 11:18:26 np0005551750.novalocal systemd[1]: Reached target NFS client services.
Dec 09 11:18:26 np0005551750.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Dec 09 11:18:26 np0005551750.novalocal systemd[1]: Reached target Remote File Systems.
Dec 09 11:18:26 np0005551750.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 09 11:18:26 np0005551750.novalocal systemd[1]: Finished Network Manager Wait Online.
Dec 09 11:18:26 np0005551750.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Dec 09 11:18:26 np0005551750.novalocal cloud-init[923]: Cloud-init v. 24.4-7.el9 running 'init' at Tue, 09 Dec 2025 11:18:26 +0000. Up 11.71 seconds.
Dec 09 11:18:26 np0005551750.novalocal cloud-init[923]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Dec 09 11:18:26 np0005551750.novalocal cloud-init[923]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec 09 11:18:26 np0005551750.novalocal cloud-init[923]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Dec 09 11:18:26 np0005551750.novalocal cloud-init[923]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec 09 11:18:26 np0005551750.novalocal cloud-init[923]: ci-info: |  eth0  | True |         38.102.83.98         | 255.255.255.0 | global | fa:16:3e:92:8d:85 |
Dec 09 11:18:26 np0005551750.novalocal cloud-init[923]: ci-info: |  eth0  | True | fe80::f816:3eff:fe92:8d85/64 |       .       |  link  | fa:16:3e:92:8d:85 |
Dec 09 11:18:26 np0005551750.novalocal cloud-init[923]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Dec 09 11:18:26 np0005551750.novalocal cloud-init[923]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Dec 09 11:18:26 np0005551750.novalocal cloud-init[923]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec 09 11:18:26 np0005551750.novalocal cloud-init[923]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Dec 09 11:18:26 np0005551750.novalocal cloud-init[923]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec 09 11:18:26 np0005551750.novalocal cloud-init[923]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Dec 09 11:18:26 np0005551750.novalocal cloud-init[923]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec 09 11:18:26 np0005551750.novalocal cloud-init[923]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Dec 09 11:18:26 np0005551750.novalocal cloud-init[923]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Dec 09 11:18:26 np0005551750.novalocal cloud-init[923]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Dec 09 11:18:26 np0005551750.novalocal cloud-init[923]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec 09 11:18:26 np0005551750.novalocal cloud-init[923]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Dec 09 11:18:26 np0005551750.novalocal cloud-init[923]: ci-info: +-------+-------------+---------+-----------+-------+
Dec 09 11:18:26 np0005551750.novalocal cloud-init[923]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Dec 09 11:18:26 np0005551750.novalocal cloud-init[923]: ci-info: +-------+-------------+---------+-----------+-------+
Dec 09 11:18:26 np0005551750.novalocal cloud-init[923]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Dec 09 11:18:26 np0005551750.novalocal cloud-init[923]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Dec 09 11:18:26 np0005551750.novalocal cloud-init[923]: ci-info: +-------+-------------+---------+-----------+-------+
Dec 09 11:18:27 np0005551750.novalocal useradd[990]: new group: name=cloud-user, GID=1001
Dec 09 11:18:27 np0005551750.novalocal useradd[990]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Dec 09 11:18:27 np0005551750.novalocal useradd[990]: add 'cloud-user' to group 'adm'
Dec 09 11:18:27 np0005551750.novalocal useradd[990]: add 'cloud-user' to group 'systemd-journal'
Dec 09 11:18:27 np0005551750.novalocal useradd[990]: add 'cloud-user' to shadow group 'adm'
Dec 09 11:18:27 np0005551750.novalocal useradd[990]: add 'cloud-user' to shadow group 'systemd-journal'
Dec 09 11:18:27 np0005551750.novalocal cloud-init[923]: Generating public/private rsa key pair.
Dec 09 11:18:27 np0005551750.novalocal cloud-init[923]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Dec 09 11:18:27 np0005551750.novalocal cloud-init[923]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Dec 09 11:18:27 np0005551750.novalocal cloud-init[923]: The key fingerprint is:
Dec 09 11:18:27 np0005551750.novalocal cloud-init[923]: SHA256:mih3dHVkUpp/mfkraN6V4BpfSRrri4fOVSi7AUkjt+E root@np0005551750.novalocal
Dec 09 11:18:27 np0005551750.novalocal cloud-init[923]: The key's randomart image is:
Dec 09 11:18:27 np0005551750.novalocal cloud-init[923]: +---[RSA 3072]----+
Dec 09 11:18:27 np0005551750.novalocal cloud-init[923]: |          ..+    |
Dec 09 11:18:27 np0005551750.novalocal cloud-init[923]: |           *     |
Dec 09 11:18:27 np0005551750.novalocal cloud-init[923]: |       . =+ .    |
Dec 09 11:18:27 np0005551750.novalocal cloud-init[923]: |        =.=o  .+ |
Dec 09 11:18:27 np0005551750.novalocal cloud-init[923]: |      . SE ..+=o |
Dec 09 11:18:27 np0005551750.novalocal cloud-init[923]: |     o +  . +.B.o|
Dec 09 11:18:27 np0005551750.novalocal cloud-init[923]: |  . o +    +o= =.|
Dec 09 11:18:27 np0005551750.novalocal cloud-init[923]: |   o .    .+@oo .|
Dec 09 11:18:27 np0005551750.novalocal cloud-init[923]: |          +B.=o. |
Dec 09 11:18:27 np0005551750.novalocal cloud-init[923]: +----[SHA256]-----+
Dec 09 11:18:27 np0005551750.novalocal cloud-init[923]: Generating public/private ecdsa key pair.
Dec 09 11:18:27 np0005551750.novalocal cloud-init[923]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Dec 09 11:18:27 np0005551750.novalocal cloud-init[923]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Dec 09 11:18:27 np0005551750.novalocal cloud-init[923]: The key fingerprint is:
Dec 09 11:18:27 np0005551750.novalocal cloud-init[923]: SHA256:Qq0776XAQBBzRTOSZAenwCeWlQ61OkoqTT3MknYmLzE root@np0005551750.novalocal
Dec 09 11:18:27 np0005551750.novalocal cloud-init[923]: The key's randomart image is:
Dec 09 11:18:27 np0005551750.novalocal cloud-init[923]: +---[ECDSA 256]---+
Dec 09 11:18:27 np0005551750.novalocal cloud-init[923]: | .=*O*B          |
Dec 09 11:18:27 np0005551750.novalocal cloud-init[923]: |  **+* +         |
Dec 09 11:18:27 np0005551750.novalocal cloud-init[923]: | . =+ . .        |
Dec 09 11:18:27 np0005551750.novalocal cloud-init[923]: |   B.. .         |
Dec 09 11:18:27 np0005551750.novalocal cloud-init[923]: | .E O o S        |
Dec 09 11:18:27 np0005551750.novalocal cloud-init[923]: |o= X + o         |
Dec 09 11:18:27 np0005551750.novalocal cloud-init[923]: |+ o . =   .      |
Dec 09 11:18:27 np0005551750.novalocal cloud-init[923]: |.  .   + o       |
Dec 09 11:18:27 np0005551750.novalocal cloud-init[923]: |       .+        |
Dec 09 11:18:27 np0005551750.novalocal cloud-init[923]: +----[SHA256]-----+
Dec 09 11:18:27 np0005551750.novalocal cloud-init[923]: Generating public/private ed25519 key pair.
Dec 09 11:18:27 np0005551750.novalocal cloud-init[923]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Dec 09 11:18:27 np0005551750.novalocal cloud-init[923]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Dec 09 11:18:27 np0005551750.novalocal cloud-init[923]: The key fingerprint is:
Dec 09 11:18:27 np0005551750.novalocal cloud-init[923]: SHA256:jEu5jBclInUGkiMi8ZGLhkp0dEgMvw3lSOs5gW/neKs root@np0005551750.novalocal
Dec 09 11:18:27 np0005551750.novalocal cloud-init[923]: The key's randomart image is:
Dec 09 11:18:27 np0005551750.novalocal cloud-init[923]: +--[ED25519 256]--+
Dec 09 11:18:27 np0005551750.novalocal cloud-init[923]: |.oB*++o          |
Dec 09 11:18:27 np0005551750.novalocal cloud-init[923]: |+oBB*o           |
Dec 09 11:18:27 np0005551750.novalocal cloud-init[923]: |=++O... .        |
Dec 09 11:18:27 np0005551750.novalocal cloud-init[923]: |oo+.*. *         |
Dec 09 11:18:27 np0005551750.novalocal cloud-init[923]: |+  B o= S        |
Dec 09 11:18:27 np0005551750.novalocal cloud-init[923]: |. . =+ +         |
Dec 09 11:18:27 np0005551750.novalocal cloud-init[923]: |   ..o=          |
Dec 09 11:18:27 np0005551750.novalocal cloud-init[923]: |    ...          |
Dec 09 11:18:27 np0005551750.novalocal cloud-init[923]: |   E..           |
Dec 09 11:18:27 np0005551750.novalocal cloud-init[923]: +----[SHA256]-----+
Dec 09 11:18:27 np0005551750.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Dec 09 11:18:27 np0005551750.novalocal systemd[1]: Reached target Cloud-config availability.
Dec 09 11:18:27 np0005551750.novalocal systemd[1]: Reached target Network is Online.
Dec 09 11:18:27 np0005551750.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Dec 09 11:18:27 np0005551750.novalocal systemd[1]: Starting Crash recovery kernel arming...
Dec 09 11:18:27 np0005551750.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Dec 09 11:18:27 np0005551750.novalocal systemd[1]: Starting System Logging Service...
Dec 09 11:18:27 np0005551750.novalocal systemd[1]: Starting OpenSSH server daemon...
Dec 09 11:18:27 np0005551750.novalocal sm-notify[1006]: Version 2.5.4 starting
Dec 09 11:18:27 np0005551750.novalocal systemd[1]: Starting Permit User Sessions...
Dec 09 11:18:27 np0005551750.novalocal systemd[1]: Started Notify NFS peers of a restart.
Dec 09 11:18:27 np0005551750.novalocal sshd[1008]: Server listening on 0.0.0.0 port 22.
Dec 09 11:18:27 np0005551750.novalocal sshd[1008]: Server listening on :: port 22.
Dec 09 11:18:27 np0005551750.novalocal systemd[1]: Started OpenSSH server daemon.
Dec 09 11:18:27 np0005551750.novalocal systemd[1]: Finished Permit User Sessions.
Dec 09 11:18:27 np0005551750.novalocal systemd[1]: Started Command Scheduler.
Dec 09 11:18:27 np0005551750.novalocal systemd[1]: Started Getty on tty1.
Dec 09 11:18:27 np0005551750.novalocal systemd[1]: Started Serial Getty on ttyS0.
Dec 09 11:18:27 np0005551750.novalocal crond[1012]: (CRON) STARTUP (1.5.7)
Dec 09 11:18:27 np0005551750.novalocal systemd[1]: Reached target Login Prompts.
Dec 09 11:18:27 np0005551750.novalocal crond[1012]: (CRON) INFO (Syslog will be used instead of sendmail.)
Dec 09 11:18:27 np0005551750.novalocal crond[1012]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 83% if used.)
Dec 09 11:18:27 np0005551750.novalocal crond[1012]: (CRON) INFO (running with inotify support)
Dec 09 11:18:27 np0005551750.novalocal rsyslogd[1007]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1007" x-info="https://www.rsyslog.com"] start
Dec 09 11:18:27 np0005551750.novalocal systemd[1]: Started System Logging Service.
Dec 09 11:18:27 np0005551750.novalocal rsyslogd[1007]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Dec 09 11:18:27 np0005551750.novalocal systemd[1]: Reached target Multi-User System.
Dec 09 11:18:27 np0005551750.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Dec 09 11:18:28 np0005551750.novalocal sshd-session[1021]: Unable to negotiate with 38.102.83.114 port 45118: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Dec 09 11:18:28 np0005551750.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec 09 11:18:28 np0005551750.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Dec 09 11:18:28 np0005551750.novalocal sshd-session[1038]: Unable to negotiate with 38.102.83.114 port 45132: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Dec 09 11:18:28 np0005551750.novalocal sshd-session[1047]: Unable to negotiate with 38.102.83.114 port 45144: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Dec 09 11:18:28 np0005551750.novalocal sshd-session[1011]: Connection closed by 38.102.83.114 port 45116 [preauth]
Dec 09 11:18:28 np0005551750.novalocal rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 09 11:18:28 np0005551750.novalocal sshd-session[1067]: Connection closed by 38.102.83.114 port 45166 [preauth]
Dec 09 11:18:28 np0005551750.novalocal sshd-session[1078]: Unable to negotiate with 38.102.83.114 port 45172: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Dec 09 11:18:28 np0005551750.novalocal sshd-session[1030]: Connection closed by 38.102.83.114 port 45128 [preauth]
Dec 09 11:18:28 np0005551750.novalocal sshd-session[1080]: Unable to negotiate with 38.102.83.114 port 45176: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Dec 09 11:18:28 np0005551750.novalocal kdumpctl[1022]: kdump: No kdump initial ramdisk found.
Dec 09 11:18:28 np0005551750.novalocal kdumpctl[1022]: kdump: Rebuilding /boot/initramfs-5.14.0-648.el9.x86_64kdump.img
Dec 09 11:18:28 np0005551750.novalocal sshd-session[1057]: Connection closed by 38.102.83.114 port 45160 [preauth]
Dec 09 11:18:28 np0005551750.novalocal cloud-init[1153]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Tue, 09 Dec 2025 11:18:28 +0000. Up 13.46 seconds.
Dec 09 11:18:28 np0005551750.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Dec 09 11:18:28 np0005551750.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Dec 09 11:18:28 np0005551750.novalocal dracut[1285]: dracut-057-102.git20250818.el9
Dec 09 11:18:28 np0005551750.novalocal cloud-init[1303]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Tue, 09 Dec 2025 11:18:28 +0000. Up 13.91 seconds.
Dec 09 11:18:28 np0005551750.novalocal cloud-init[1309]: #############################################################
Dec 09 11:18:28 np0005551750.novalocal cloud-init[1312]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Dec 09 11:18:28 np0005551750.novalocal dracut[1287]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-648.el9.x86_64kdump.img 5.14.0-648.el9.x86_64
Dec 09 11:18:28 np0005551750.novalocal cloud-init[1318]: 256 SHA256:Qq0776XAQBBzRTOSZAenwCeWlQ61OkoqTT3MknYmLzE root@np0005551750.novalocal (ECDSA)
Dec 09 11:18:28 np0005551750.novalocal cloud-init[1326]: 256 SHA256:jEu5jBclInUGkiMi8ZGLhkp0dEgMvw3lSOs5gW/neKs root@np0005551750.novalocal (ED25519)
Dec 09 11:18:28 np0005551750.novalocal cloud-init[1332]: 3072 SHA256:mih3dHVkUpp/mfkraN6V4BpfSRrri4fOVSi7AUkjt+E root@np0005551750.novalocal (RSA)
Dec 09 11:18:28 np0005551750.novalocal cloud-init[1334]: -----END SSH HOST KEY FINGERPRINTS-----
Dec 09 11:18:28 np0005551750.novalocal cloud-init[1336]: #############################################################
Dec 09 11:18:28 np0005551750.novalocal cloud-init[1303]: Cloud-init v. 24.4-7.el9 finished at Tue, 09 Dec 2025 11:18:28 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 14.09 seconds
Dec 09 11:18:29 np0005551750.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Dec 09 11:18:29 np0005551750.novalocal systemd[1]: Reached target Cloud-init target.
Dec 09 11:18:29 np0005551750.novalocal dracut[1287]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Dec 09 11:18:29 np0005551750.novalocal dracut[1287]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Dec 09 11:18:29 np0005551750.novalocal dracut[1287]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Dec 09 11:18:29 np0005551750.novalocal dracut[1287]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec 09 11:18:29 np0005551750.novalocal dracut[1287]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec 09 11:18:29 np0005551750.novalocal dracut[1287]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec 09 11:18:29 np0005551750.novalocal dracut[1287]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec 09 11:18:29 np0005551750.novalocal dracut[1287]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec 09 11:18:29 np0005551750.novalocal dracut[1287]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec 09 11:18:29 np0005551750.novalocal dracut[1287]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec 09 11:18:29 np0005551750.novalocal dracut[1287]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec 09 11:18:29 np0005551750.novalocal dracut[1287]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec 09 11:18:29 np0005551750.novalocal dracut[1287]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec 09 11:18:29 np0005551750.novalocal dracut[1287]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec 09 11:18:29 np0005551750.novalocal dracut[1287]: Module 'ifcfg' will not be installed, because it's in the list to be omitted!
Dec 09 11:18:29 np0005551750.novalocal dracut[1287]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
Dec 09 11:18:29 np0005551750.novalocal dracut[1287]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec 09 11:18:29 np0005551750.novalocal dracut[1287]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec 09 11:18:29 np0005551750.novalocal dracut[1287]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec 09 11:18:29 np0005551750.novalocal dracut[1287]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec 09 11:18:29 np0005551750.novalocal dracut[1287]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec 09 11:18:29 np0005551750.novalocal dracut[1287]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec 09 11:18:29 np0005551750.novalocal dracut[1287]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec 09 11:18:29 np0005551750.novalocal dracut[1287]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec 09 11:18:29 np0005551750.novalocal dracut[1287]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec 09 11:18:29 np0005551750.novalocal dracut[1287]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec 09 11:18:29 np0005551750.novalocal dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec 09 11:18:29 np0005551750.novalocal dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec 09 11:18:29 np0005551750.novalocal dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec 09 11:18:29 np0005551750.novalocal dracut[1287]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec 09 11:18:29 np0005551750.novalocal dracut[1287]: Module 'resume' will not be installed, because it's in the list to be omitted!
Dec 09 11:18:29 np0005551750.novalocal dracut[1287]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Dec 09 11:18:29 np0005551750.novalocal dracut[1287]: Module 'earlykdump' will not be installed, because it's in the list to be omitted!
Dec 09 11:18:30 np0005551750.novalocal dracut[1287]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec 09 11:18:30 np0005551750.novalocal dracut[1287]: memstrack is not available
Dec 09 11:18:30 np0005551750.novalocal dracut[1287]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec 09 11:18:30 np0005551750.novalocal dracut[1287]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec 09 11:18:30 np0005551750.novalocal dracut[1287]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec 09 11:18:30 np0005551750.novalocal dracut[1287]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec 09 11:18:30 np0005551750.novalocal dracut[1287]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec 09 11:18:30 np0005551750.novalocal dracut[1287]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec 09 11:18:30 np0005551750.novalocal dracut[1287]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec 09 11:18:30 np0005551750.novalocal dracut[1287]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec 09 11:18:30 np0005551750.novalocal dracut[1287]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec 09 11:18:30 np0005551750.novalocal dracut[1287]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec 09 11:18:30 np0005551750.novalocal dracut[1287]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec 09 11:18:30 np0005551750.novalocal dracut[1287]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec 09 11:18:30 np0005551750.novalocal dracut[1287]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec 09 11:18:30 np0005551750.novalocal dracut[1287]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec 09 11:18:30 np0005551750.novalocal dracut[1287]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec 09 11:18:30 np0005551750.novalocal dracut[1287]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec 09 11:18:30 np0005551750.novalocal dracut[1287]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec 09 11:18:30 np0005551750.novalocal dracut[1287]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec 09 11:18:30 np0005551750.novalocal dracut[1287]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec 09 11:18:30 np0005551750.novalocal dracut[1287]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec 09 11:18:30 np0005551750.novalocal dracut[1287]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec 09 11:18:30 np0005551750.novalocal dracut[1287]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec 09 11:18:30 np0005551750.novalocal dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec 09 11:18:30 np0005551750.novalocal dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec 09 11:18:30 np0005551750.novalocal dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec 09 11:18:30 np0005551750.novalocal dracut[1287]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec 09 11:18:30 np0005551750.novalocal dracut[1287]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec 09 11:18:30 np0005551750.novalocal dracut[1287]: memstrack is not available
Dec 09 11:18:30 np0005551750.novalocal dracut[1287]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec 09 11:18:30 np0005551750.novalocal dracut[1287]: *** Including module: systemd ***
Dec 09 11:18:30 np0005551750.novalocal dracut[1287]: *** Including module: fips ***
Dec 09 11:18:30 np0005551750.novalocal chronyd[790]: Selected source 54.39.23.64 (2.centos.pool.ntp.org)
Dec 09 11:18:30 np0005551750.novalocal chronyd[790]: System clock TAI offset set to 37 seconds
Dec 09 11:18:31 np0005551750.novalocal dracut[1287]: *** Including module: systemd-initrd ***
Dec 09 11:18:31 np0005551750.novalocal dracut[1287]: *** Including module: i18n ***
Dec 09 11:18:31 np0005551750.novalocal dracut[1287]: *** Including module: drm ***
Dec 09 11:18:32 np0005551750.novalocal dracut[1287]: *** Including module: prefixdevname ***
Dec 09 11:18:32 np0005551750.novalocal dracut[1287]: *** Including module: kernel-modules ***
Dec 09 11:18:32 np0005551750.novalocal kernel: block vda: the capability attribute has been deprecated.
Dec 09 11:18:32 np0005551750.novalocal chronyd[790]: Selected source 23.159.16.194 (2.centos.pool.ntp.org)
Dec 09 11:18:32 np0005551750.novalocal dracut[1287]: *** Including module: kernel-modules-extra ***
Dec 09 11:18:32 np0005551750.novalocal dracut[1287]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Dec 09 11:18:32 np0005551750.novalocal dracut[1287]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Dec 09 11:18:32 np0005551750.novalocal dracut[1287]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Dec 09 11:18:32 np0005551750.novalocal dracut[1287]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Dec 09 11:18:33 np0005551750.novalocal dracut[1287]: *** Including module: qemu ***
Dec 09 11:18:33 np0005551750.novalocal dracut[1287]: *** Including module: fstab-sys ***
Dec 09 11:18:33 np0005551750.novalocal dracut[1287]: *** Including module: rootfs-block ***
Dec 09 11:18:33 np0005551750.novalocal dracut[1287]: *** Including module: terminfo ***
Dec 09 11:18:33 np0005551750.novalocal dracut[1287]: *** Including module: udev-rules ***
Dec 09 11:18:33 np0005551750.novalocal dracut[1287]: Skipping udev rule: 91-permissions.rules
Dec 09 11:18:33 np0005551750.novalocal dracut[1287]: Skipping udev rule: 80-drivers-modprobe.rules
Dec 09 11:18:33 np0005551750.novalocal dracut[1287]: *** Including module: virtiofs ***
Dec 09 11:18:33 np0005551750.novalocal dracut[1287]: *** Including module: dracut-systemd ***
Dec 09 11:18:34 np0005551750.novalocal dracut[1287]: *** Including module: usrmount ***
Dec 09 11:18:34 np0005551750.novalocal dracut[1287]: *** Including module: base ***
Dec 09 11:18:34 np0005551750.novalocal dracut[1287]: *** Including module: fs-lib ***
Dec 09 11:18:34 np0005551750.novalocal dracut[1287]: *** Including module: kdumpbase ***
Dec 09 11:18:34 np0005551750.novalocal irqbalance[794]: Cannot change IRQ 25 affinity: Operation not permitted
Dec 09 11:18:34 np0005551750.novalocal irqbalance[794]: IRQ 25 affinity is now unmanaged
Dec 09 11:18:34 np0005551750.novalocal irqbalance[794]: Cannot change IRQ 31 affinity: Operation not permitted
Dec 09 11:18:34 np0005551750.novalocal irqbalance[794]: IRQ 31 affinity is now unmanaged
Dec 09 11:18:34 np0005551750.novalocal irqbalance[794]: Cannot change IRQ 28 affinity: Operation not permitted
Dec 09 11:18:34 np0005551750.novalocal irqbalance[794]: IRQ 28 affinity is now unmanaged
Dec 09 11:18:34 np0005551750.novalocal irqbalance[794]: Cannot change IRQ 32 affinity: Operation not permitted
Dec 09 11:18:34 np0005551750.novalocal irqbalance[794]: IRQ 32 affinity is now unmanaged
Dec 09 11:18:34 np0005551750.novalocal irqbalance[794]: Cannot change IRQ 30 affinity: Operation not permitted
Dec 09 11:18:34 np0005551750.novalocal irqbalance[794]: IRQ 30 affinity is now unmanaged
Dec 09 11:18:34 np0005551750.novalocal irqbalance[794]: Cannot change IRQ 29 affinity: Operation not permitted
Dec 09 11:18:34 np0005551750.novalocal irqbalance[794]: IRQ 29 affinity is now unmanaged
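
Annotation: on a KVM guest, the affinity of some virtio MSI interrupts is managed by the kernel itself, so irqbalance's writes fail with "Operation not permitted" and the IRQ is marked unmanaged; the messages above are expected and harmless. A sketch for inspecting one of them via standard procfs paths:

    # Which device owns IRQ 25?
    grep ' 25:' /proc/interrupts

    # Current CPU affinity mask for IRQ 25 (writes may be refused, as seen above)
    cat /proc/irq/25/smp_affinity
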
Dec 09 11:18:34 np0005551750.novalocal dracut[1287]: *** Including module: microcode_ctl-fw_dir_override ***
Dec 09 11:18:34 np0005551750.novalocal dracut[1287]:   microcode_ctl module: mangling fw_dir
Dec 09 11:18:34 np0005551750.novalocal dracut[1287]:     microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Dec 09 11:18:34 np0005551750.novalocal dracut[1287]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Dec 09 11:18:34 np0005551750.novalocal dracut[1287]:     microcode_ctl: configuration "intel" is ignored
Dec 09 11:18:34 np0005551750.novalocal dracut[1287]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Dec 09 11:18:34 np0005551750.novalocal dracut[1287]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Dec 09 11:18:34 np0005551750.novalocal dracut[1287]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Dec 09 11:18:35 np0005551750.novalocal dracut[1287]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Dec 09 11:18:35 np0005551750.novalocal dracut[1287]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Dec 09 11:18:35 np0005551750.novalocal dracut[1287]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Dec 09 11:18:35 np0005551750.novalocal dracut[1287]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Dec 09 11:18:35 np0005551750.novalocal dracut[1287]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Dec 09 11:18:35 np0005551750.novalocal dracut[1287]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Dec 09 11:18:35 np0005551750.novalocal dracut[1287]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Dec 09 11:18:35 np0005551750.novalocal dracut[1287]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Dec 09 11:18:35 np0005551750.novalocal dracut[1287]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Dec 09 11:18:35 np0005551750.novalocal dracut[1287]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Dec 09 11:18:35 np0005551750.novalocal dracut[1287]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Dec 09 11:18:35 np0005551750.novalocal dracut[1287]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Dec 09 11:18:35 np0005551750.novalocal dracut[1287]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Dec 09 11:18:35 np0005551750.novalocal dracut[1287]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Dec 09 11:18:35 np0005551750.novalocal dracut[1287]:     microcode_ctl: configuration "intel-06-8f-08" is ignored
Dec 09 11:18:35 np0005551750.novalocal dracut[1287]:     microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
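
Annotation: each microcode_ctl caveat directory targets a specific Intel CPU model; none appears to match this guest, so every caveat configuration is ignored and fw_dir falls back to the stock search path. On a KVM guest this is the expected outcome, since microcode is applied on the hypervisor host rather than in the VM. A one-line check, assuming systemd is present:

    # Prints "kvm" on this node, confirming microcode updates are the host's job
    systemd-detect-virt
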
Dec 09 11:18:35 np0005551750.novalocal dracut[1287]: *** Including module: openssl ***
Dec 09 11:18:35 np0005551750.novalocal dracut[1287]: *** Including module: shutdown ***
Dec 09 11:18:35 np0005551750.novalocal dracut[1287]: *** Including module: squash ***
Dec 09 11:18:35 np0005551750.novalocal dracut[1287]: *** Including modules done ***
Dec 09 11:18:35 np0005551750.novalocal dracut[1287]: *** Installing kernel module dependencies ***
Dec 09 11:18:36 np0005551750.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 09 11:18:36 np0005551750.novalocal dracut[1287]: *** Installing kernel module dependencies done ***
Dec 09 11:18:36 np0005551750.novalocal dracut[1287]: *** Resolving executable dependencies ***
Dec 09 11:18:38 np0005551750.novalocal dracut[1287]: *** Resolving executable dependencies done ***
Dec 09 11:18:38 np0005551750.novalocal dracut[1287]: *** Generating early-microcode cpio image ***
Dec 09 11:18:38 np0005551750.novalocal dracut[1287]: *** Store current command line parameters ***
Dec 09 11:18:38 np0005551750.novalocal dracut[1287]: Stored kernel commandline:
Dec 09 11:18:38 np0005551750.novalocal dracut[1287]: No dracut internal kernel commandline stored in the initramfs
Dec 09 11:18:38 np0005551750.novalocal dracut[1287]: *** Install squash loader ***
Dec 09 11:18:39 np0005551750.novalocal dracut[1287]: *** Squashing the files inside the initramfs ***
Dec 09 11:18:40 np0005551750.novalocal dracut[1287]: *** Squashing the files inside the initramfs done ***
Dec 09 11:18:40 np0005551750.novalocal dracut[1287]: *** Creating image file '/boot/initramfs-5.14.0-648.el9.x86_64kdump.img' ***
Dec 09 11:18:40 np0005551750.novalocal dracut[1287]: *** Hardlinking files ***
Dec 09 11:18:40 np0005551750.novalocal dracut[1287]: Mode:           real
Dec 09 11:18:40 np0005551750.novalocal dracut[1287]: Files:          50
Dec 09 11:18:40 np0005551750.novalocal dracut[1287]: Linked:         0 files
Dec 09 11:18:40 np0005551750.novalocal dracut[1287]: Compared:       0 xattrs
Dec 09 11:18:40 np0005551750.novalocal dracut[1287]: Compared:       0 files
Dec 09 11:18:40 np0005551750.novalocal dracut[1287]: Saved:          0 B
Dec 09 11:18:40 np0005551750.novalocal dracut[1287]: Duration:       0.000795 seconds
Dec 09 11:18:40 np0005551750.novalocal dracut[1287]: *** Hardlinking files done ***
Dec 09 11:18:41 np0005551750.novalocal dracut[1287]: *** Creating initramfs image file '/boot/initramfs-5.14.0-648.el9.x86_64kdump.img' done ***
Dec 09 11:18:42 np0005551750.novalocal kdumpctl[1022]: kdump: kexec: loaded kdump kernel
Dec 09 11:18:42 np0005551750.novalocal kdumpctl[1022]: kdump: Starting kdump: [OK]
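
Annotation: the kdump initramfs has been rebuilt and the crash kernel loaded. Two hedged follow-up checks, using tools from the kexec-tools and dracut packages:

    # Confirm the crash kernel is loaded and the service is operational
    kdumpctl status

    # Peek inside the freshly created kdump image
    lsinitrd /boot/initramfs-5.14.0-648.el9.x86_64kdump.img | head
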
Dec 09 11:18:42 np0005551750.novalocal systemd[1]: Finished Crash recovery kernel arming.
Dec 09 11:18:42 np0005551750.novalocal systemd[1]: Startup finished in 3.136s (kernel) + 3.254s (initrd) + 21.324s (userspace) = 27.715s.
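
Annotation: the startup summary above is the same data systemd-analyze reports after boot; a sketch for breaking it down further:

    # Overall kernel/initrd/userspace timing, matching the line above
    systemd-analyze

    # The slowest units during this boot
    systemd-analyze blame | head
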
Dec 09 11:18:55 np0005551750.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 09 11:21:30 np0005551750.novalocal sshd-session[4299]: Accepted publickey for zuul from 38.102.83.114 port 43072 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Dec 09 11:21:30 np0005551750.novalocal systemd[1]: Created slice User Slice of UID 1000.
Dec 09 11:21:30 np0005551750.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Dec 09 11:21:30 np0005551750.novalocal systemd-logind[799]: New session 1 of user zuul.
Dec 09 11:21:30 np0005551750.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Dec 09 11:21:30 np0005551750.novalocal systemd[1]: Starting User Manager for UID 1000...
Dec 09 11:21:30 np0005551750.novalocal systemd[4303]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 11:21:30 np0005551750.novalocal systemd[4303]: Queued start job for default target Main User Target.
Dec 09 11:21:30 np0005551750.novalocal systemd[4303]: Created slice User Application Slice.
Dec 09 11:21:30 np0005551750.novalocal systemd[4303]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 09 11:21:30 np0005551750.novalocal systemd[4303]: Started Daily Cleanup of User's Temporary Directories.
Dec 09 11:21:30 np0005551750.novalocal systemd[4303]: Reached target Paths.
Dec 09 11:21:30 np0005551750.novalocal systemd[4303]: Reached target Timers.
Dec 09 11:21:30 np0005551750.novalocal systemd[4303]: Starting D-Bus User Message Bus Socket...
Dec 09 11:21:30 np0005551750.novalocal systemd[4303]: Starting Create User's Volatile Files and Directories...
Dec 09 11:21:30 np0005551750.novalocal systemd[4303]: Finished Create User's Volatile Files and Directories.
Dec 09 11:21:30 np0005551750.novalocal systemd[4303]: Listening on D-Bus User Message Bus Socket.
Dec 09 11:21:30 np0005551750.novalocal systemd[4303]: Reached target Sockets.
Dec 09 11:21:30 np0005551750.novalocal systemd[4303]: Reached target Basic System.
Dec 09 11:21:30 np0005551750.novalocal systemd[4303]: Reached target Main User Target.
Dec 09 11:21:30 np0005551750.novalocal systemd[4303]: Startup finished in 130ms.
Dec 09 11:21:30 np0005551750.novalocal systemd[1]: Started User Manager for UID 1000.
Dec 09 11:21:30 np0005551750.novalocal systemd[1]: Started Session 1 of User zuul.
Dec 09 11:21:30 np0005551750.novalocal sshd-session[4299]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 11:21:31 np0005551750.novalocal python3[4386]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 11:21:33 np0005551750.novalocal python3[4414]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 11:21:41 np0005551750.novalocal python3[4472]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 11:21:42 np0005551750.novalocal python3[4512]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Dec 09 11:21:44 np0005551750.novalocal python3[4538]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDaJwUlvzslqBC9oJ54J96blk/LxvtXwiGxDjJbSsaQlD4cZWrSczh3sb3+xOt4Xb702iZVu8z2EUunymkfs0nAcHA7gh24e8T6OG+wUkhw75XnGls8WzuXqhVjfOQhgWEtpMjz4x0sSF+MTa/bF6juebcm9DQ5r4gmg2khN/Q7EWHQbirLTKwS9BXq8WfFEC9S6FFUkNHUrZOedr1T3MNkdo36DLwSKaruH+3i+iGFhLT3RcPoYSY+rlFuTfjn/jdcd1RYVZO5sgevmmWvNoCF2Mb8BbtvHSVaCmlpN56sxM/7hASjC4nK5hEoHiUBfKp8gLcYnFS0MWTK6XMiLdR08F32yU0lDNWtph5VqB0839PdXRu8ykoIPofAN7bbLGI9JxqASKK+esB62CtZlEZ0rcKrpyw8Soda483oLRoymmpDvg4Tv8caROzdx5fVFsgThLtyq2TqQPLojYNwaLxVNdpEyBVkIxpS0onDF2RTdRiX5ySoX5aFICX9BY887Ks= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 11:21:45 np0005551750.novalocal python3[4562]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:21:45 np0005551750.novalocal python3[4661]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 09 11:21:45 np0005551750.novalocal python3[4732]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765279305.3256145-251-139735161406282/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=915e9096169f46e280ef0d11b848cd7e_id_rsa follow=False checksum=053a3af0dc9d49f8e3938c04479eaf1ea285ecf3 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:21:46 np0005551750.novalocal python3[4855]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 09 11:21:46 np0005551750.novalocal python3[4926]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765279306.238518-306-155470089424710/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=915e9096169f46e280ef0d11b848cd7e_id_rsa.pub follow=False checksum=758d5d002cefaccea8c7816ee9db4fe76868f411 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
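
Annotation: Ansible logs file modes in decimal, which is easy to misread; here mode=384 is octal 0600 (the private key) and mode=420 is octal 0644 (the public key). A shell sketch of the conversion:

    # Convert the decimal modes from the log to the familiar octal form
    printf '%o\n' 384   # -> 600, /home/zuul/.ssh/id_rsa
    printf '%o\n' 420   # -> 644, /home/zuul/.ssh/id_rsa.pub
    printf '%o\n' 493   # -> 755, the zuul-output directories above
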
Dec 09 11:21:48 np0005551750.novalocal python3[4974]: ansible-ping Invoked with data=pong
Dec 09 11:21:49 np0005551750.novalocal python3[4998]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 11:21:51 np0005551750.novalocal python3[5056]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Dec 09 11:21:52 np0005551750.novalocal python3[5088]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:21:52 np0005551750.novalocal python3[5112]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:21:52 np0005551750.novalocal python3[5136]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:21:53 np0005551750.novalocal python3[5160]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:21:53 np0005551750.novalocal python3[5184]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:21:53 np0005551750.novalocal python3[5208]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:21:55 np0005551750.novalocal sudo[5232]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqrktzhbsfsokwnixakjkqzsertfvmeb ; /usr/bin/python3'
Dec 09 11:21:55 np0005551750.novalocal sudo[5232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:21:55 np0005551750.novalocal python3[5234]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:21:55 np0005551750.novalocal sudo[5232]: pam_unix(sudo:session): session closed for user root
Dec 09 11:21:56 np0005551750.novalocal sudo[5310]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtltdrngzmpbfhxgnijnqkjkvxfawaay ; /usr/bin/python3'
Dec 09 11:21:56 np0005551750.novalocal sudo[5310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:21:56 np0005551750.novalocal python3[5312]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 09 11:21:56 np0005551750.novalocal sudo[5310]: pam_unix(sudo:session): session closed for user root
Dec 09 11:21:56 np0005551750.novalocal sudo[5383]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frgmlierwjxqgszbquwtnrfzgooaythp ; /usr/bin/python3'
Dec 09 11:21:56 np0005551750.novalocal sudo[5383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:21:56 np0005551750.novalocal python3[5385]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765279315.738603-31-34415168815244/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:21:56 np0005551750.novalocal sudo[5383]: pam_unix(sudo:session): session closed for user root
Dec 09 11:21:57 np0005551750.novalocal python3[5433]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 11:21:57 np0005551750.novalocal python3[5457]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 11:21:57 np0005551750.novalocal python3[5481]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 11:21:58 np0005551750.novalocal python3[5505]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 11:21:58 np0005551750.novalocal python3[5529]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 11:21:58 np0005551750.novalocal python3[5553]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 11:21:59 np0005551750.novalocal python3[5577]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 11:21:59 np0005551750.novalocal python3[5601]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 11:21:59 np0005551750.novalocal python3[5625]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 11:21:59 np0005551750.novalocal python3[5649]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 11:22:00 np0005551750.novalocal python3[5673]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 11:22:00 np0005551750.novalocal python3[5697]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 11:22:00 np0005551750.novalocal python3[5721]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 11:22:00 np0005551750.novalocal python3[5745]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 11:22:01 np0005551750.novalocal python3[5769]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 11:22:01 np0005551750.novalocal python3[5793]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 11:22:01 np0005551750.novalocal python3[5817]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 11:22:02 np0005551750.novalocal python3[5841]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 11:22:02 np0005551750.novalocal python3[5865]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 11:22:02 np0005551750.novalocal python3[5889]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 11:22:03 np0005551750.novalocal python3[5913]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 11:22:03 np0005551750.novalocal python3[5937]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 11:22:03 np0005551750.novalocal python3[5961]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 11:22:03 np0005551750.novalocal python3[5985]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 11:22:04 np0005551750.novalocal python3[6009]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 11:22:04 np0005551750.novalocal python3[6033]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
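
Annotation: the run above installs one maintainer public key per task via the authorized_key module. A hedged ad-hoc equivalent of a single task (the key string is a shortened placeholder, not one of the real keys above):

    # Hypothetical ad-hoc form of one authorized_key task
    ansible localhost -m ansible.builtin.authorized_key \
      -a "user=zuul state=present key='ssh-ed25519 AAAA...placeholder user@example'"

    # Afterwards, list the fingerprints of every installed key
    ssh-keygen -lf /home/zuul/.ssh/authorized_keys
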
Dec 09 11:22:06 np0005551750.novalocal sudo[6057]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzqtkajnaysovzfuiqidmbstlnpsskbr ; /usr/bin/python3'
Dec 09 11:22:06 np0005551750.novalocal sudo[6057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:22:06 np0005551750.novalocal python3[6059]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec 09 11:22:06 np0005551750.novalocal systemd[1]: Starting Time & Date Service...
Dec 09 11:22:06 np0005551750.novalocal systemd[1]: Started Time & Date Service.
Dec 09 11:22:06 np0005551750.novalocal systemd-timedated[6061]: Changed time zone to 'UTC' (UTC).
Dec 09 11:22:06 np0005551750.novalocal sudo[6057]: pam_unix(sudo:session): session closed for user root
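
Annotation: on a systemd host the community.general.timezone module effectively drives timedatectl, which is why systemd-timedated starts and logs the zone change above. The equivalent manual steps:

    # Set the zone the same way the module did, then verify
    timedatectl set-timezone UTC
    timedatectl status | grep 'Time zone'
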
Dec 09 11:22:07 np0005551750.novalocal sudo[6088]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wuvutxxerkaswzufnrnvdrcwvueayjxx ; /usr/bin/python3'
Dec 09 11:22:07 np0005551750.novalocal sudo[6088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:22:07 np0005551750.novalocal python3[6090]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:22:07 np0005551750.novalocal sudo[6088]: pam_unix(sudo:session): session closed for user root
Dec 09 11:22:07 np0005551750.novalocal python3[6166]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 09 11:22:08 np0005551750.novalocal python3[6237]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1765279327.4919908-251-88239340768457/source _original_basename=tmp2d5gmqd5 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:22:08 np0005551750.novalocal python3[6337]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 09 11:22:08 np0005551750.novalocal python3[6408]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1765279328.3381853-301-1278522035753/source _original_basename=tmp2apk4n2c follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:22:09 np0005551750.novalocal sudo[6508]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcqonfaufrvukihdurjotrywugtmlefu ; /usr/bin/python3'
Dec 09 11:22:09 np0005551750.novalocal sudo[6508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:22:09 np0005551750.novalocal python3[6510]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 09 11:22:09 np0005551750.novalocal sudo[6508]: pam_unix(sudo:session): session closed for user root
Dec 09 11:22:10 np0005551750.novalocal sudo[6581]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocerokadekzzvcuunkppnkybmhavjhsz ; /usr/bin/python3'
Dec 09 11:22:10 np0005551750.novalocal sudo[6581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:22:10 np0005551750.novalocal python3[6583]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1765279329.525641-381-1891107923580/source _original_basename=tmp9lcbcupe follow=False checksum=bd525bf2100f6176f2da7b3ae03ee4707d4592f7 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:22:10 np0005551750.novalocal sudo[6581]: pam_unix(sudo:session): session closed for user root
Dec 09 11:22:10 np0005551750.novalocal python3[6631]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 11:22:10 np0005551750.novalocal python3[6657]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 11:22:11 np0005551750.novalocal sudo[6735]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpjxyrocsuyhdlfnuyweqasxqutekbtp ; /usr/bin/python3'
Dec 09 11:22:11 np0005551750.novalocal sudo[6735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:22:11 np0005551750.novalocal python3[6737]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 09 11:22:11 np0005551750.novalocal sudo[6735]: pam_unix(sudo:session): session closed for user root
Dec 09 11:22:11 np0005551750.novalocal sudo[6808]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkwaniewnqxzzfbyyznladexayydagyh ; /usr/bin/python3'
Dec 09 11:22:11 np0005551750.novalocal sudo[6808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:22:11 np0005551750.novalocal python3[6810]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1765279331.0840032-451-129972711944038/source _original_basename=tmpinr60r_n follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:22:11 np0005551750.novalocal sudo[6808]: pam_unix(sudo:session): session closed for user root
Dec 09 11:22:12 np0005551750.novalocal sudo[6859]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owxclsuomhnlbifvfxvrtlymfzxkuskc ; /usr/bin/python3'
Dec 09 11:22:12 np0005551750.novalocal sudo[6859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:22:12 np0005551750.novalocal python3[6861]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ec2-ffbe-7071-ee91-00000000001f-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 11:22:12 np0005551750.novalocal sudo[6859]: pam_unix(sudo:session): session closed for user root
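
Annotation: the play validates sudoers after dropping in /etc/sudoers.d/zuul-sudo-grep (mode=288, i.e. octal 0440). The same check from a shell:

    # Validate the full sudoers configuration, as the task above did
    visudo -c

    # Or check only the new drop-in file
    visudo -c -f /etc/sudoers.d/zuul-sudo-grep
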
Dec 09 11:22:13 np0005551750.novalocal python3[6889]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env _uses_shell=True zuul_log_id=fa163ec2-ffbe-7071-ee91-000000000020-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Dec 09 11:22:14 np0005551750.novalocal python3[6917]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:22:34 np0005551750.novalocal sudo[6941]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdcblwxtgyqiifistgqaassbzmyzzwnz ; /usr/bin/python3'
Dec 09 11:22:34 np0005551750.novalocal sudo[6941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:22:34 np0005551750.novalocal python3[6943]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:22:34 np0005551750.novalocal sudo[6941]: pam_unix(sudo:session): session closed for user root
Dec 09 11:22:36 np0005551750.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 09 11:23:14 np0005551750.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec 09 11:23:14 np0005551750.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Dec 09 11:23:14 np0005551750.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Dec 09 11:23:14 np0005551750.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Dec 09 11:23:14 np0005551750.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Dec 09 11:23:14 np0005551750.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Dec 09 11:23:14 np0005551750.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Dec 09 11:23:14 np0005551750.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Dec 09 11:23:14 np0005551750.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Dec 09 11:23:14 np0005551750.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Dec 09 11:23:14 np0005551750.novalocal NetworkManager[860]: <info>  [1765279394.1663] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec 09 11:23:14 np0005551750.novalocal systemd-udevd[6947]: Network interface NamePolicy= disabled on kernel command line.
Dec 09 11:23:14 np0005551750.novalocal NetworkManager[860]: <info>  [1765279394.1843] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 09 11:23:14 np0005551750.novalocal NetworkManager[860]: <info>  [1765279394.1875] settings: (eth1): created default wired connection 'Wired connection 1'
Dec 09 11:23:14 np0005551750.novalocal NetworkManager[860]: <info>  [1765279394.1878] device (eth1): carrier: link connected
Dec 09 11:23:14 np0005551750.novalocal NetworkManager[860]: <info>  [1765279394.1880] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec 09 11:23:14 np0005551750.novalocal NetworkManager[860]: <info>  [1765279394.1885] policy: auto-activating connection 'Wired connection 1' (17c7b7a5-04f6-3c3d-903e-30cdf5d51276)
Dec 09 11:23:14 np0005551750.novalocal NetworkManager[860]: <info>  [1765279394.1889] device (eth1): Activation: starting connection 'Wired connection 1' (17c7b7a5-04f6-3c3d-903e-30cdf5d51276)
Dec 09 11:23:14 np0005551750.novalocal NetworkManager[860]: <info>  [1765279394.1890] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 09 11:23:14 np0005551750.novalocal NetworkManager[860]: <info>  [1765279394.1892] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 09 11:23:14 np0005551750.novalocal NetworkManager[860]: <info>  [1765279394.1897] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 09 11:23:14 np0005551750.novalocal NetworkManager[860]: <info>  [1765279394.1901] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec 09 11:23:14 np0005551750.novalocal python3[6973]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ec2-ffbe-faa5-87c7-000000000128-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
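
Annotation: ip -j link emits JSON, which the playbook presumably parses to find the hot-plugged eth1 seen above. A minimal sketch of doing the same by hand, assuming jq is installed:

    # Print each interface name with its operational state
    ip -j link | jq -r '.[] | "\(.ifname) \(.operstate)"'
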
Dec 09 11:23:24 np0005551750.novalocal sudo[7051]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jadhwuvagxmkhjehwrlvxmvdjmpohdbr ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 09 11:23:24 np0005551750.novalocal sudo[7051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:23:25 np0005551750.novalocal python3[7053]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 09 11:23:25 np0005551750.novalocal sudo[7051]: pam_unix(sudo:session): session closed for user root
Dec 09 11:23:25 np0005551750.novalocal sudo[7124]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbptvtmwvrosnpdhvjpreqrckzaaifen ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 09 11:23:25 np0005551750.novalocal sudo[7124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:23:25 np0005551750.novalocal python3[7126]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765279404.6837564-104-5336648170474/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=da70b728709a88acc7608d1cb987772768ee67bc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:23:25 np0005551750.novalocal sudo[7124]: pam_unix(sudo:session): session closed for user root
Dec 09 11:23:25 np0005551750.novalocal sudo[7174]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfnyvjkxjbrdevqchwatugdllfeoclhl ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 09 11:23:25 np0005551750.novalocal sudo[7174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:23:26 np0005551750.novalocal python3[7176]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 09 11:23:26 np0005551750.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec 09 11:23:26 np0005551750.novalocal systemd[1]: Stopped Network Manager Wait Online.
Dec 09 11:23:26 np0005551750.novalocal systemd[1]: Stopping Network Manager Wait Online...
Dec 09 11:23:26 np0005551750.novalocal systemd[1]: Stopping Network Manager...
Dec 09 11:23:26 np0005551750.novalocal NetworkManager[860]: <info>  [1765279406.2245] caught SIGTERM, shutting down normally.
Dec 09 11:23:26 np0005551750.novalocal NetworkManager[860]: <info>  [1765279406.2260] dhcp4 (eth0): canceled DHCP transaction
Dec 09 11:23:26 np0005551750.novalocal NetworkManager[860]: <info>  [1765279406.2260] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 09 11:23:26 np0005551750.novalocal NetworkManager[860]: <info>  [1765279406.2260] dhcp4 (eth0): state changed no lease
Dec 09 11:23:26 np0005551750.novalocal NetworkManager[860]: <info>  [1765279406.2262] manager: NetworkManager state is now CONNECTING
Dec 09 11:23:26 np0005551750.novalocal NetworkManager[860]: <info>  [1765279406.2337] dhcp4 (eth1): canceled DHCP transaction
Dec 09 11:23:26 np0005551750.novalocal NetworkManager[860]: <info>  [1765279406.2337] dhcp4 (eth1): state changed no lease
Dec 09 11:23:26 np0005551750.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 09 11:23:26 np0005551750.novalocal NetworkManager[860]: <info>  [1765279406.2578] exiting (success)
Dec 09 11:23:26 np0005551750.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 09 11:23:26 np0005551750.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Dec 09 11:23:26 np0005551750.novalocal systemd[1]: Stopped Network Manager.
Dec 09 11:23:26 np0005551750.novalocal systemd[1]: NetworkManager.service: Consumed 2.139s CPU time, 10.0M memory peak.
Dec 09 11:23:26 np0005551750.novalocal systemd[1]: Starting Network Manager...
Dec 09 11:23:26 np0005551750.novalocal NetworkManager[7193]: <info>  [1765279406.3405] NetworkManager (version 1.54.2-1.el9) is starting... (after a restart, boot:3b8ce532-7834-4232-b208-67ea0773ffd0)
Dec 09 11:23:26 np0005551750.novalocal NetworkManager[7193]: <info>  [1765279406.3406] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec 09 11:23:26 np0005551750.novalocal NetworkManager[7193]: <info>  [1765279406.3461] manager[0x558012dd4000]: monitoring kernel firmware directory '/lib/firmware'.
Dec 09 11:23:26 np0005551750.novalocal systemd[1]: Starting Hostname Service...
Dec 09 11:23:26 np0005551750.novalocal systemd[1]: Started Hostname Service.
Dec 09 11:23:26 np0005551750.novalocal NetworkManager[7193]: <info>  [1765279406.4193] hostname: hostname: using hostnamed
Dec 09 11:23:26 np0005551750.novalocal NetworkManager[7193]: <info>  [1765279406.4194] hostname: static hostname changed from (none) to "np0005551750.novalocal"
Dec 09 11:23:26 np0005551750.novalocal NetworkManager[7193]: <info>  [1765279406.4200] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec 09 11:23:26 np0005551750.novalocal NetworkManager[7193]: <info>  [1765279406.4206] manager[0x558012dd4000]: rfkill: Wi-Fi hardware radio set enabled
Dec 09 11:23:26 np0005551750.novalocal NetworkManager[7193]: <info>  [1765279406.4207] manager[0x558012dd4000]: rfkill: WWAN hardware radio set enabled
Dec 09 11:23:26 np0005551750.novalocal NetworkManager[7193]: <info>  [1765279406.4243] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-device-plugin-team.so)
Dec 09 11:23:26 np0005551750.novalocal NetworkManager[7193]: <info>  [1765279406.4243] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 09 11:23:26 np0005551750.novalocal NetworkManager[7193]: <info>  [1765279406.4244] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 09 11:23:26 np0005551750.novalocal NetworkManager[7193]: <info>  [1765279406.4245] manager: Networking is enabled by state file
Dec 09 11:23:26 np0005551750.novalocal NetworkManager[7193]: <info>  [1765279406.4248] settings: Loaded settings plugin: keyfile (internal)
Dec 09 11:23:26 np0005551750.novalocal NetworkManager[7193]: <info>  [1765279406.4253] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec 09 11:23:26 np0005551750.novalocal NetworkManager[7193]: <info>  [1765279406.4281] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
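
Annotation: NetworkManager itself names the fix for this deprecation warning; legacy ifcfg profiles can be converted in place:

    # Convert ifcfg-rh profiles to keyfile format, as the warning suggests
    nmcli connection migrate
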
Dec 09 11:23:26 np0005551750.novalocal NetworkManager[7193]: <info>  [1765279406.4291] dhcp: init: Using DHCP client 'internal'
Dec 09 11:23:26 np0005551750.novalocal NetworkManager[7193]: <info>  [1765279406.4294] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec 09 11:23:26 np0005551750.novalocal NetworkManager[7193]: <info>  [1765279406.4299] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 09 11:23:26 np0005551750.novalocal NetworkManager[7193]: <info>  [1765279406.4305] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec 09 11:23:26 np0005551750.novalocal NetworkManager[7193]: <info>  [1765279406.4313] device (lo): Activation: starting connection 'lo' (8ff964e8-13df-4b37-96bf-869f14ef83b9)
Dec 09 11:23:26 np0005551750.novalocal NetworkManager[7193]: <info>  [1765279406.4320] device (eth0): carrier: link connected
Dec 09 11:23:26 np0005551750.novalocal NetworkManager[7193]: <info>  [1765279406.4325] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec 09 11:23:26 np0005551750.novalocal NetworkManager[7193]: <info>  [1765279406.4330] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec 09 11:23:26 np0005551750.novalocal NetworkManager[7193]: <info>  [1765279406.4331] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 09 11:23:26 np0005551750.novalocal NetworkManager[7193]: <info>  [1765279406.4337] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 09 11:23:26 np0005551750.novalocal NetworkManager[7193]: <info>  [1765279406.4343] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 09 11:23:26 np0005551750.novalocal NetworkManager[7193]: <info>  [1765279406.4349] device (eth1): carrier: link connected
Dec 09 11:23:26 np0005551750.novalocal NetworkManager[7193]: <info>  [1765279406.4354] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec 09 11:23:26 np0005551750.novalocal NetworkManager[7193]: <info>  [1765279406.4359] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (17c7b7a5-04f6-3c3d-903e-30cdf5d51276) (indicated)
Dec 09 11:23:26 np0005551750.novalocal NetworkManager[7193]: <info>  [1765279406.4359] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 09 11:23:26 np0005551750.novalocal NetworkManager[7193]: <info>  [1765279406.4364] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 09 11:23:26 np0005551750.novalocal NetworkManager[7193]: <info>  [1765279406.4370] device (eth1): Activation: starting connection 'Wired connection 1' (17c7b7a5-04f6-3c3d-903e-30cdf5d51276)
Dec 09 11:23:26 np0005551750.novalocal systemd[1]: Started Network Manager.
Dec 09 11:23:26 np0005551750.novalocal NetworkManager[7193]: <info>  [1765279406.4377] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 09 11:23:26 np0005551750.novalocal NetworkManager[7193]: <info>  [1765279406.4381] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec 09 11:23:26 np0005551750.novalocal NetworkManager[7193]: <info>  [1765279406.4383] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec 09 11:23:26 np0005551750.novalocal NetworkManager[7193]: <info>  [1765279406.4385] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec 09 11:23:26 np0005551750.novalocal NetworkManager[7193]: <info>  [1765279406.4388] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 09 11:23:26 np0005551750.novalocal NetworkManager[7193]: <info>  [1765279406.4391] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 09 11:23:26 np0005551750.novalocal NetworkManager[7193]: <info>  [1765279406.4393] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 09 11:23:26 np0005551750.novalocal NetworkManager[7193]: <info>  [1765279406.4395] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 09 11:23:26 np0005551750.novalocal NetworkManager[7193]: <info>  [1765279406.4399] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec 09 11:23:26 np0005551750.novalocal NetworkManager[7193]: <info>  [1765279406.4405] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 09 11:23:26 np0005551750.novalocal NetworkManager[7193]: <info>  [1765279406.4410] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 09 11:23:26 np0005551750.novalocal NetworkManager[7193]: <info>  [1765279406.4419] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 09 11:23:26 np0005551750.novalocal NetworkManager[7193]: <info>  [1765279406.4422] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec 09 11:23:26 np0005551750.novalocal NetworkManager[7193]: <info>  [1765279406.4441] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec 09 11:23:26 np0005551750.novalocal NetworkManager[7193]: <info>  [1765279406.4448] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec 09 11:23:26 np0005551750.novalocal NetworkManager[7193]: <info>  [1765279406.4457] device (lo): Activation: successful, device activated.
Dec 09 11:23:26 np0005551750.novalocal NetworkManager[7193]: <info>  [1765279406.4468] dhcp4 (eth0): state changed new lease, address=38.102.83.98
Dec 09 11:23:26 np0005551750.novalocal NetworkManager[7193]: <info>  [1765279406.4478] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec 09 11:23:26 np0005551750.novalocal systemd[1]: Starting Network Manager Wait Online...
Dec 09 11:23:26 np0005551750.novalocal sudo[7174]: pam_unix(sudo:session): session closed for user root
Dec 09 11:23:26 np0005551750.novalocal NetworkManager[7193]: <info>  [1765279406.4846] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 09 11:23:26 np0005551750.novalocal NetworkManager[7193]: <info>  [1765279406.4871] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 09 11:23:26 np0005551750.novalocal NetworkManager[7193]: <info>  [1765279406.4872] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 09 11:23:26 np0005551750.novalocal NetworkManager[7193]: <info>  [1765279406.4875] manager: NetworkManager state is now CONNECTED_SITE
Dec 09 11:23:26 np0005551750.novalocal NetworkManager[7193]: <info>  [1765279406.4878] device (eth0): Activation: successful, device activated.
Dec 09 11:23:26 np0005551750.novalocal NetworkManager[7193]: <info>  [1765279406.4883] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec 09 11:23:26 np0005551750.novalocal python3[7261]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ec2-ffbe-faa5-87c7-0000000000bd-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
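
Once the eth0 DHCP lease arrives, NetworkManager promotes 'System eth0' to the IPv4 default for routing and DNS, and the Zuul task above immediately verifies the result with `ip route`. A minimal Python sketch of that verification, assuming only the iproute2 CLI (nothing here comes from this host beyond the expected device name):

    #!/usr/bin/env python3
    """Sketch: confirm a default IPv4 route exists, as the `ip route` check above does."""
    import re
    import subprocess

    out = subprocess.run(["ip", "route"], capture_output=True, text=True,
                         check=True).stdout
    match = re.search(r"^default via (\S+) dev (\S+)", out, re.MULTILINE)
    if not match:
        raise SystemExit("no default route found")
    gateway, dev = match.groups()
    print(f"default route via {gateway} on {dev}")  # expect dev == eth0 here
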
Dec 09 11:23:36 np0005551750.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 09 11:23:56 np0005551750.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 09 11:24:03 np0005551750.novalocal systemd[4303]: Starting Mark boot as successful...
Dec 09 11:24:03 np0005551750.novalocal systemd[4303]: Finished Mark boot as successful.
Dec 09 11:24:11 np0005551750.novalocal NetworkManager[7193]: <info>  [1765279451.7798] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 09 11:24:11 np0005551750.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 09 11:24:11 np0005551750.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 09 11:24:11 np0005551750.novalocal NetworkManager[7193]: <info>  [1765279451.8045] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 09 11:24:11 np0005551750.novalocal NetworkManager[7193]: <info>  [1765279451.8050] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 09 11:24:11 np0005551750.novalocal NetworkManager[7193]: <info>  [1765279451.8059] device (eth1): Activation: successful, device activated.
Dec 09 11:24:11 np0005551750.novalocal NetworkManager[7193]: <info>  [1765279451.8066] manager: startup complete
Dec 09 11:24:11 np0005551750.novalocal NetworkManager[7193]: <info>  [1765279451.8068] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Dec 09 11:24:11 np0005551750.novalocal NetworkManager[7193]: <warn>  [1765279451.8082] device (eth1): Activation: failed for connection 'Wired connection 1'
Dec 09 11:24:11 np0005551750.novalocal NetworkManager[7193]: <info>  [1765279451.8090] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Dec 09 11:24:11 np0005551750.novalocal systemd[1]: Finished Network Manager Wait Online.
Dec 09 11:24:11 np0005551750.novalocal NetworkManager[7193]: <info>  [1765279451.8207] dhcp4 (eth1): canceled DHCP transaction
Dec 09 11:24:11 np0005551750.novalocal NetworkManager[7193]: <info>  [1765279451.8212] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec 09 11:24:11 np0005551750.novalocal NetworkManager[7193]: <info>  [1765279451.8213] dhcp4 (eth1): state changed no lease
Dec 09 11:24:11 np0005551750.novalocal NetworkManager[7193]: <info>  [1765279451.8246] policy: auto-activating connection 'ci-private-network' (b7cdfc62-b3ac-5a41-99f8-23b040034403)
Dec 09 11:24:11 np0005551750.novalocal NetworkManager[7193]: <info>  [1765279451.8256] device (eth1): Activation: starting connection 'ci-private-network' (b7cdfc62-b3ac-5a41-99f8-23b040034403)
Dec 09 11:24:11 np0005551750.novalocal NetworkManager[7193]: <info>  [1765279451.8259] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 09 11:24:11 np0005551750.novalocal NetworkManager[7193]: <info>  [1765279451.8268] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 09 11:24:11 np0005551750.novalocal NetworkManager[7193]: <info>  [1765279451.8283] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 09 11:24:11 np0005551750.novalocal NetworkManager[7193]: <info>  [1765279451.8303] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 09 11:24:11 np0005551750.novalocal NetworkManager[7193]: <info>  [1765279451.8358] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 09 11:24:11 np0005551750.novalocal NetworkManager[7193]: <info>  [1765279451.8361] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 09 11:24:11 np0005551750.novalocal NetworkManager[7193]: <info>  [1765279451.8376] device (eth1): Activation: successful, device activated.
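
The eth1 sequence above is NetworkManager's assume-then-fallback behaviour: the pre-existing 'Wired connection 1' is assumed at boot, its DHCP transaction gets no lease within the 45-second window, the device is failed with reason 'ip-config-unavailable' once startup completes, and the auto-connect profile 'ci-private-network' is activated in its place. A hedged sketch that waits for a set of devices to reach the connected state via nmcli (the device set and the doubled timeout are assumptions):

    #!/usr/bin/env python3
    """Sketch: poll NetworkManager with nmcli until the expected devices connect."""
    import subprocess
    import time

    EXPECTED = {"eth0", "eth1"}  # hypothetical device set for this host

    def connected_devices() -> set:
        # -t = terse colon-separated output, -f = select the DEVICE and STATE fields
        out = subprocess.run(
            ["nmcli", "-t", "-f", "DEVICE,STATE", "device", "status"],
            capture_output=True, text=True, check=True).stdout
        return {dev for dev, _, state in
                (line.partition(":") for line in out.splitlines())
                if state == "connected"}

    deadline = time.monotonic() + 90  # twice NetworkManager's own 45 s DHCP timeout
    while time.monotonic() < deadline:
        if EXPECTED <= connected_devices():
            print("all devices activated")
            break
        time.sleep(2)
    else:
        raise SystemExit("timed out waiting for NetworkManager activation")
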
Dec 09 11:24:21 np0005551750.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 09 11:24:26 np0005551750.novalocal sshd-session[4313]: Received disconnect from 38.102.83.114 port 43072:11: disconnected by user
Dec 09 11:24:26 np0005551750.novalocal sshd-session[4313]: Disconnected from user zuul 38.102.83.114 port 43072
Dec 09 11:24:26 np0005551750.novalocal sshd-session[4299]: pam_unix(sshd:session): session closed for user zuul
Dec 09 11:24:26 np0005551750.novalocal systemd-logind[799]: Session 1 logged out. Waiting for processes to exit.
Dec 09 11:25:26 np0005551750.novalocal sshd-session[7292]: Accepted publickey for zuul from 38.102.83.114 port 36960 ssh2: RSA SHA256:6Ie4ZXK9Ek36UC2sJEF3TJKSrACzyJGKSwiteASgUXs
Dec 09 11:25:26 np0005551750.novalocal systemd-logind[799]: New session 3 of user zuul.
Dec 09 11:25:26 np0005551750.novalocal systemd[1]: Started Session 3 of User zuul.
Dec 09 11:25:26 np0005551750.novalocal sshd-session[7292]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 11:25:26 np0005551750.novalocal sudo[7371]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwdqvxshhdqbvpvmfcghqjumkqdyqsqz ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 09 11:25:26 np0005551750.novalocal sudo[7371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:25:27 np0005551750.novalocal python3[7373]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 09 11:25:27 np0005551750.novalocal sudo[7371]: pam_unix(sudo:session): session closed for user root
Dec 09 11:25:27 np0005551750.novalocal sudo[7444]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifsrpwsdrdglwgoxvnmccjffxpvekikd ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 09 11:25:27 np0005551750.novalocal sudo[7444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:25:27 np0005551750.novalocal python3[7446]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765279526.733586-373-214824050303546/source _original_basename=tmpiz88tuql follow=False checksum=9f44f561ffacd74a44c8561b9a5e0d09a035d2e2 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:25:27 np0005551750.novalocal sudo[7444]: pam_unix(sudo:session): session closed for user root
Dec 09 11:25:31 np0005551750.novalocal sshd-session[7295]: Connection closed by 38.102.83.114 port 36960
Dec 09 11:25:31 np0005551750.novalocal sshd-session[7292]: pam_unix(sshd:session): session closed for user zuul
Dec 09 11:25:31 np0005551750.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Dec 09 11:25:31 np0005551750.novalocal systemd-logind[799]: Session 3 logged out. Waiting for processes to exit.
Dec 09 11:25:31 np0005551750.novalocal systemd-logind[799]: Removed session 3.
Dec 09 11:27:03 np0005551750.novalocal systemd[4303]: Created slice User Background Tasks Slice.
Dec 09 11:27:03 np0005551750.novalocal systemd[4303]: Starting Cleanup of User's Temporary Files and Directories...
Dec 09 11:27:03 np0005551750.novalocal systemd[4303]: Finished Cleanup of User's Temporary Files and Directories.
Dec 09 11:28:16 np0005551750.novalocal sshd-session[7473]: Received disconnect from 193.46.255.7 port 28690:11:  [preauth]
Dec 09 11:28:16 np0005551750.novalocal sshd-session[7473]: Disconnected from authenticating user root 193.46.255.7 port 28690 [preauth]
Dec 09 11:31:06 np0005551750.novalocal sshd-session[7477]: Accepted publickey for zuul from 38.102.83.114 port 41728 ssh2: RSA SHA256:6Ie4ZXK9Ek36UC2sJEF3TJKSrACzyJGKSwiteASgUXs
Dec 09 11:31:06 np0005551750.novalocal systemd-logind[799]: New session 4 of user zuul.
Dec 09 11:31:06 np0005551750.novalocal systemd[1]: Started Session 4 of User zuul.
Dec 09 11:31:06 np0005551750.novalocal sshd-session[7477]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 11:31:06 np0005551750.novalocal sudo[7504]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgsvahixmwglcuycmywgrpgrovesfrkf ; /usr/bin/python3'
Dec 09 11:31:06 np0005551750.novalocal sudo[7504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:31:06 np0005551750.novalocal python3[7506]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda _uses_shell=True zuul_log_id=fa163ec2-ffbe-a67c-bc30-000000001f0d-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 11:31:06 np0005551750.novalocal sudo[7504]: pam_unix(sudo:session): session closed for user root
Dec 09 11:31:06 np0005551750.novalocal sudo[7532]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhtjcukayehpajwlfykndhzciiztubjc ; /usr/bin/python3'
Dec 09 11:31:06 np0005551750.novalocal sudo[7532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:31:06 np0005551750.novalocal python3[7534]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:31:06 np0005551750.novalocal sudo[7532]: pam_unix(sudo:session): session closed for user root
Dec 09 11:31:06 np0005551750.novalocal sudo[7559]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozcfzgaldbxpunpfuvclsocflihrykul ; /usr/bin/python3'
Dec 09 11:31:06 np0005551750.novalocal sudo[7559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:31:07 np0005551750.novalocal python3[7561]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:31:07 np0005551750.novalocal sudo[7559]: pam_unix(sudo:session): session closed for user root
Dec 09 11:31:07 np0005551750.novalocal sudo[7585]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzdyvtyauavrvrbnmbyottaduhxfcrgt ; /usr/bin/python3'
Dec 09 11:31:07 np0005551750.novalocal sudo[7585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:31:07 np0005551750.novalocal python3[7587]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:31:07 np0005551750.novalocal sudo[7585]: pam_unix(sudo:session): session closed for user root
Dec 09 11:31:07 np0005551750.novalocal sudo[7611]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jerqzyqeanvatuitgyjdqxdwrfpurtct ; /usr/bin/python3'
Dec 09 11:31:07 np0005551750.novalocal sudo[7611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:31:07 np0005551750.novalocal python3[7613]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:31:07 np0005551750.novalocal sudo[7611]: pam_unix(sudo:session): session closed for user root
Dec 09 11:31:08 np0005551750.novalocal sudo[7637]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-citybuerfcrlsqisuubisawcazehgtsi ; /usr/bin/python3'
Dec 09 11:31:08 np0005551750.novalocal sudo[7637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:31:08 np0005551750.novalocal python3[7639]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:31:08 np0005551750.novalocal sudo[7637]: pam_unix(sudo:session): session closed for user root
Dec 09 11:31:08 np0005551750.novalocal sudo[7715]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppkfhssbdmgkiposjibmospmhzwsgedz ; /usr/bin/python3'
Dec 09 11:31:08 np0005551750.novalocal sudo[7715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:31:08 np0005551750.novalocal python3[7717]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 09 11:31:08 np0005551750.novalocal sudo[7715]: pam_unix(sudo:session): session closed for user root
Dec 09 11:31:08 np0005551750.novalocal sudo[7788]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjvwgqrcvqhixsqlgbupxkcwmggktuxu ; /usr/bin/python3'
Dec 09 11:31:08 np0005551750.novalocal sudo[7788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:31:09 np0005551750.novalocal python3[7790]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765279868.4649942-523-28454806844949/source _original_basename=tmpx5uiv3fu follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:31:09 np0005551750.novalocal sudo[7788]: pam_unix(sudo:session): session closed for user root
Dec 09 11:31:09 np0005551750.novalocal sudo[7838]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpwlivoslzmskwgphgfzoebgpdhdoody ; /usr/bin/python3'
Dec 09 11:31:09 np0005551750.novalocal sudo[7838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:31:10 np0005551750.novalocal python3[7840]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 09 11:31:10 np0005551750.novalocal systemd[1]: Reloading.
Dec 09 11:31:10 np0005551750.novalocal systemd-rc-local-generator[7859]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 11:31:10 np0005551750.novalocal sudo[7838]: pam_unix(sudo:session): session closed for user root
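
The tasks above drop an override.conf into /etc/systemd/system.conf.d and immediately reload the manager with daemon_reload=True; the drop-in's contents are not logged, only its checksum. A sketch of the step with a hypothetical body, chosen because the very next task waits for io.max files to appear under /sys/fs/cgroup, which is the effect of enabling default IO accounting:

    #!/usr/bin/env python3
    """Sketch: install a systemd manager drop-in and reload, as the tasks above do.
    The DefaultIOAccounting body is an assumption, not the logged file's content."""
    import subprocess
    from pathlib import Path

    dropin = Path("/etc/systemd/system.conf.d/override.conf")
    dropin.parent.mkdir(mode=0o755, exist_ok=True)             # mode 0755, as logged
    dropin.write_text("[Manager]\nDefaultIOAccounting=yes\n")  # hypothetical body
    subprocess.run(["systemctl", "daemon-reload"], check=True)
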
Dec 09 11:31:11 np0005551750.novalocal sudo[7894]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmfiwgwcxhpiubbtfnjzofxzfaoocxis ; /usr/bin/python3'
Dec 09 11:31:11 np0005551750.novalocal sudo[7894]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:31:12 np0005551750.novalocal python3[7896]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Dec 09 11:31:12 np0005551750.novalocal sudo[7894]: pam_unix(sudo:session): session closed for user root
Dec 09 11:31:12 np0005551750.novalocal sudo[7920]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwoutzrijhuwfcggjwpbwdtykcniaqay ; /usr/bin/python3'
Dec 09 11:31:12 np0005551750.novalocal sudo[7920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:31:12 np0005551750.novalocal python3[7922]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 11:31:12 np0005551750.novalocal sudo[7920]: pam_unix(sudo:session): session closed for user root
Dec 09 11:31:12 np0005551750.novalocal sudo[7948]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvdxbdbpwusiuyvogtkiwlwxfhgityuf ; /usr/bin/python3'
Dec 09 11:31:12 np0005551750.novalocal sudo[7948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:31:12 np0005551750.novalocal python3[7950]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 11:31:12 np0005551750.novalocal sudo[7948]: pam_unix(sudo:session): session closed for user root
Dec 09 11:31:13 np0005551750.novalocal sudo[7976]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvbkgfpljyeicqzgaxwocmnsjfgukzku ; /usr/bin/python3'
Dec 09 11:31:13 np0005551750.novalocal sudo[7976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:31:13 np0005551750.novalocal python3[7978]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 11:31:13 np0005551750.novalocal sudo[7976]: pam_unix(sudo:session): session closed for user root
Dec 09 11:31:13 np0005551750.novalocal sudo[8004]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egpqtsbbmkxrumdxjoxtnbxdhjhytdhe ; /usr/bin/python3'
Dec 09 11:31:13 np0005551750.novalocal sudo[8004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:31:13 np0005551750.novalocal python3[8006]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 11:31:13 np0005551750.novalocal sudo[8004]: pam_unix(sudo:session): session closed for user root
Dec 09 11:31:14 np0005551750.novalocal python3[8033]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max; _uses_shell=True zuul_log_id=fa163ec2-ffbe-a67c-bc30-000000001f14-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
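
The four echo tasks above apply identical cgroup v2 IO throttles (18000 read and write IOPS, 262144000 bytes/s, i.e. 250 MiB/s, each way) to device 252:0, the MAJ:MIN that the earlier lsblk task reported for /dev/vda, and the final task reads every io.max back into the job log. The same writes as a sketch (root required; assumes cgroup v2 mounted at /sys/fs/cgroup with the io controller enabled on these slices):

    #!/usr/bin/env python3
    """Sketch: set and verify the io.max limits written by the echo tasks above."""
    from pathlib import Path

    LIMIT = "252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000"
    SLICES = ("init.scope", "machine.slice", "system.slice", "user.slice")

    for name in SLICES:
        io_max = Path("/sys/fs/cgroup", name, "io.max")
        io_max.write_text(LIMIT + "\n")                # same as: echo "..." > io.max
        print(name, "->", io_max.read_text().strip())  # read-back check
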
Dec 09 11:31:14 np0005551750.novalocal python3[8063]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 09 11:31:14 np0005551750.novalocal irqbalance[794]: Cannot change IRQ 26 affinity: Operation not permitted
Dec 09 11:31:14 np0005551750.novalocal irqbalance[794]: IRQ 26 affinity is now unmanaged
Dec 09 11:31:17 np0005551750.novalocal sshd-session[7480]: Connection closed by 38.102.83.114 port 41728
Dec 09 11:31:17 np0005551750.novalocal sshd-session[7477]: pam_unix(sshd:session): session closed for user zuul
Dec 09 11:31:17 np0005551750.novalocal systemd-logind[799]: Session 4 logged out. Waiting for processes to exit.
Dec 09 11:31:17 np0005551750.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Dec 09 11:31:17 np0005551750.novalocal systemd[1]: session-4.scope: Consumed 4.228s CPU time.
Dec 09 11:31:17 np0005551750.novalocal systemd-logind[799]: Removed session 4.
Dec 09 11:31:19 np0005551750.novalocal sshd-session[8067]: Accepted publickey for zuul from 38.102.83.114 port 50672 ssh2: RSA SHA256:6Ie4ZXK9Ek36UC2sJEF3TJKSrACzyJGKSwiteASgUXs
Dec 09 11:31:19 np0005551750.novalocal systemd-logind[799]: New session 5 of user zuul.
Dec 09 11:31:19 np0005551750.novalocal systemd[1]: Started Session 5 of User zuul.
Dec 09 11:31:19 np0005551750.novalocal sshd-session[8067]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 11:31:19 np0005551750.novalocal sudo[8094]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgnhnsqajhavcurwakdhwsbdfvnofdlz ; /usr/bin/python3'
Dec 09 11:31:19 np0005551750.novalocal sudo[8094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:31:19 np0005551750.novalocal python3[8096]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
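
This dnf task installs podman and buildah; the bursts of kernel "SELinux: Converting 385 SID table entries" messages that follow are most likely the policy being reloaded as dependencies such as container-selinux install their policy modules during the transaction. The CLI equivalent of the task:

    #!/usr/bin/env python3
    """Sketch: CLI equivalent of the ansible.legacy.dnf task above."""
    import subprocess

    subprocess.run(["dnf", "install", "-y", "podman", "buildah"], check=True)
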
Dec 09 11:31:24 np0005551750.novalocal irqbalance[794]: Cannot change IRQ 27 affinity: Operation not permitted
Dec 09 11:31:24 np0005551750.novalocal irqbalance[794]: IRQ 27 affinity is now unmanaged
Dec 09 11:31:42 np0005551750.novalocal kernel: SELinux:  Converting 385 SID table entries...
Dec 09 11:31:42 np0005551750.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Dec 09 11:31:42 np0005551750.novalocal kernel: SELinux:  policy capability open_perms=1
Dec 09 11:31:42 np0005551750.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Dec 09 11:31:42 np0005551750.novalocal kernel: SELinux:  policy capability always_check_network=0
Dec 09 11:31:42 np0005551750.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 09 11:31:42 np0005551750.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 09 11:31:42 np0005551750.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 09 11:31:53 np0005551750.novalocal kernel: SELinux:  Converting 385 SID table entries...
Dec 09 11:31:53 np0005551750.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Dec 09 11:31:53 np0005551750.novalocal kernel: SELinux:  policy capability open_perms=1
Dec 09 11:31:53 np0005551750.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Dec 09 11:31:53 np0005551750.novalocal kernel: SELinux:  policy capability always_check_network=0
Dec 09 11:31:53 np0005551750.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 09 11:31:53 np0005551750.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 09 11:31:53 np0005551750.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 09 11:32:03 np0005551750.novalocal kernel: SELinux:  Converting 385 SID table entries...
Dec 09 11:32:03 np0005551750.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Dec 09 11:32:03 np0005551750.novalocal kernel: SELinux:  policy capability open_perms=1
Dec 09 11:32:03 np0005551750.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Dec 09 11:32:03 np0005551750.novalocal kernel: SELinux:  policy capability always_check_network=0
Dec 09 11:32:03 np0005551750.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 09 11:32:03 np0005551750.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 09 11:32:03 np0005551750.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 09 11:32:04 np0005551750.novalocal setsebool[8164]: The virt_use_nfs policy boolean was changed to 1 by root
Dec 09 11:32:04 np0005551750.novalocal setsebool[8164]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
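
setsebool records two persistent boolean changes commonly needed for containerized virtualization, virt_use_nfs and virt_sandbox_use_all_caps; the -P flag rebuilds and reloads the policy, which matches the reload logged at 11:32:17. The same change via the CLI:

    #!/usr/bin/env python3
    """Sketch: flip the two virt booleans persistently, as the setsebool entries show."""
    import subprocess

    for boolean in ("virt_use_nfs", "virt_sandbox_use_all_caps"):
        subprocess.run(["setsebool", "-P", boolean, "on"], check=True)
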
Dec 09 11:32:17 np0005551750.novalocal kernel: SELinux:  Converting 388 SID table entries...
Dec 09 11:32:17 np0005551750.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Dec 09 11:32:17 np0005551750.novalocal kernel: SELinux:  policy capability open_perms=1
Dec 09 11:32:17 np0005551750.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Dec 09 11:32:17 np0005551750.novalocal kernel: SELinux:  policy capability always_check_network=0
Dec 09 11:32:17 np0005551750.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 09 11:32:17 np0005551750.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 09 11:32:17 np0005551750.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 09 11:32:37 np0005551750.novalocal dbus-broker-launch[778]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec 09 11:32:37 np0005551750.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 09 11:32:37 np0005551750.novalocal systemd[1]: Starting man-db-cache-update.service...
Dec 09 11:32:37 np0005551750.novalocal systemd[1]: Reloading.
Dec 09 11:32:37 np0005551750.novalocal systemd-rc-local-generator[8920]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 11:32:37 np0005551750.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Dec 09 11:32:39 np0005551750.novalocal sudo[8094]: pam_unix(sudo:session): session closed for user root
Dec 09 11:32:44 np0005551750.novalocal python3[13989]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot" _uses_shell=True zuul_log_id=fa163ec2-ffbe-8e44-d85c-00000000000c-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 11:32:45 np0005551750.novalocal kernel: evm: overlay not supported
Dec 09 11:32:45 np0005551750.novalocal systemd[4303]: Starting D-Bus User Message Bus...
Dec 09 11:32:45 np0005551750.novalocal dbus-broker-launch[14434]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Dec 09 11:32:45 np0005551750.novalocal dbus-broker-launch[14434]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Dec 09 11:32:45 np0005551750.novalocal systemd[4303]: Started D-Bus User Message Bus.
Dec 09 11:32:45 np0005551750.novalocal dbus-broker-lau[14434]: Ready
Dec 09 11:32:45 np0005551750.novalocal systemd[4303]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec 09 11:32:45 np0005551750.novalocal systemd[4303]: Created slice Slice /user.
Dec 09 11:32:45 np0005551750.novalocal systemd[4303]: podman-14356.scope: unit configures an IP firewall, but not running as root.
Dec 09 11:32:45 np0005551750.novalocal systemd[4303]: (This warning is only shown for the first unit using IP firewalling.)
Dec 09 11:32:45 np0005551750.novalocal systemd[4303]: Started podman-14356.scope.
Dec 09 11:32:45 np0005551750.novalocal systemd[4303]: Started podman-pause-737f3775.scope.
Dec 09 11:32:46 np0005551750.novalocal sudo[14986]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjfutgyobetvgdkftkahhbmcsbxacrkx ; /usr/bin/python3'
Dec 09 11:32:46 np0005551750.novalocal sudo[14986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:32:46 np0005551750.novalocal python3[14997]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]
                                                       location = "38.102.83.107:5001"
                                                       insecure = true path=/etc/containers/registries.conf block=[[registry]]
                                                       location = "38.102.83.107:5001"
                                                       insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:32:46 np0005551750.novalocal python3[14997]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Dec 09 11:32:47 np0005551750.novalocal sudo[14986]: pam_unix(sudo:session): session closed for user root
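
The blockinfile task above maintains a marker-delimited block in /etc/containers/registries.conf so the container tools will pull from the plain-HTTP CI registry at 38.102.83.107:5001. A sketch of the resulting block and an idempotent way to append it (the markers follow the task's marker_begin/marker_end settings):

    #!/usr/bin/env python3
    """Sketch: append the managed registries.conf block once, as blockinfile does above."""
    from pathlib import Path

    BLOCK = (
        "# BEGIN ANSIBLE MANAGED BLOCK\n"
        "[[registry]]\n"
        'location = "38.102.83.107:5001"\n'
        "insecure = true\n"
        "# END ANSIBLE MANAGED BLOCK\n"
    )
    conf = Path("/etc/containers/registries.conf")
    text = conf.read_text()
    if "# BEGIN ANSIBLE MANAGED BLOCK" not in text:      # already managed, skip
        conf.write_text(text.rstrip("\n") + "\n\n" + BLOCK)
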
Dec 09 11:32:47 np0005551750.novalocal sshd-session[8070]: Connection closed by 38.102.83.114 port 50672
Dec 09 11:32:47 np0005551750.novalocal sshd-session[8067]: pam_unix(sshd:session): session closed for user zuul
Dec 09 11:32:47 np0005551750.novalocal systemd-logind[799]: Session 5 logged out. Waiting for processes to exit.
Dec 09 11:32:47 np0005551750.novalocal systemd[1]: session-5.scope: Deactivated successfully.
Dec 09 11:32:47 np0005551750.novalocal systemd[1]: session-5.scope: Consumed 1min 15.421s CPU time.
Dec 09 11:32:47 np0005551750.novalocal systemd-logind[799]: Removed session 5.
Dec 09 11:33:08 np0005551750.novalocal sshd-session[22275]: Unable to negotiate with 38.102.83.236 port 56194: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Dec 09 11:33:08 np0005551750.novalocal sshd-session[22276]: Unable to negotiate with 38.102.83.236 port 56214: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Dec 09 11:33:08 np0005551750.novalocal sshd-session[22272]: Connection closed by 38.102.83.236 port 56184 [preauth]
Dec 09 11:33:08 np0005551750.novalocal sshd-session[22279]: Connection closed by 38.102.83.236 port 56190 [preauth]
Dec 09 11:33:08 np0005551750.novalocal sshd-session[22277]: Unable to negotiate with 38.102.83.236 port 56204: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Dec 09 11:33:13 np0005551750.novalocal sshd-session[23899]: Accepted publickey for zuul from 38.102.83.114 port 51734 ssh2: RSA SHA256:6Ie4ZXK9Ek36UC2sJEF3TJKSrACzyJGKSwiteASgUXs
Dec 09 11:33:13 np0005551750.novalocal systemd-logind[799]: New session 6 of user zuul.
Dec 09 11:33:13 np0005551750.novalocal systemd[1]: Started Session 6 of User zuul.
Dec 09 11:33:13 np0005551750.novalocal sshd-session[23899]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 11:33:13 np0005551750.novalocal python3[24011]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCjseURZJS89/lbhHxl8Le8OCYSlsMfc+hlaKS/UMei6M1xlhCofNSNA1o+RMZApygYkq0kwwJrggtCeYouUpHo= zuul@np0005551749.novalocal manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 11:33:13 np0005551750.novalocal sudo[24204]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egotkcvsicfylsroyvtpcuiolmytjuzm ; /usr/bin/python3'
Dec 09 11:33:13 np0005551750.novalocal sudo[24204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:33:14 np0005551750.novalocal python3[24213]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCjseURZJS89/lbhHxl8Le8OCYSlsMfc+hlaKS/UMei6M1xlhCofNSNA1o+RMZApygYkq0kwwJrggtCeYouUpHo= zuul@np0005551749.novalocal manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 11:33:14 np0005551750.novalocal sudo[24204]: pam_unix(sudo:session): session closed for user root
Dec 09 11:33:15 np0005551750.novalocal systemd[1]: Starting Cleanup of Temporary Directories...
Dec 09 11:33:15 np0005551750.novalocal sudo[24624]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhlmltxqzskcegbekynmviljqecpyqkc ; /usr/bin/python3'
Dec 09 11:33:15 np0005551750.novalocal sudo[24624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:33:15 np0005551750.novalocal systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Dec 09 11:33:15 np0005551750.novalocal systemd[1]: Finished Cleanup of Temporary Directories.
Dec 09 11:33:15 np0005551750.novalocal systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Dec 09 11:33:15 np0005551750.novalocal python3[24647]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005551750.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Dec 09 11:33:15 np0005551750.novalocal useradd[24724]: new group: name=cloud-admin, GID=1002
Dec 09 11:33:15 np0005551750.novalocal useradd[24724]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Dec 09 11:33:15 np0005551750.novalocal sudo[24624]: pam_unix(sudo:session): session closed for user root
Dec 09 11:33:15 np0005551750.novalocal sudo[24847]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsthsofxwgxsbnvzyizaaqjnwdxevknf ; /usr/bin/python3'
Dec 09 11:33:15 np0005551750.novalocal sudo[24847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:33:15 np0005551750.novalocal python3[24857]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCjseURZJS89/lbhHxl8Le8OCYSlsMfc+hlaKS/UMei6M1xlhCofNSNA1o+RMZApygYkq0kwwJrggtCeYouUpHo= zuul@np0005551749.novalocal manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 11:33:15 np0005551750.novalocal sudo[24847]: pam_unix(sudo:session): session closed for user root
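
The three authorized_key tasks (zuul at 11:33:13, root at 11:33:14, cloud-admin at 11:33:15) install the same controller-side ECDSA key. What the module does for each account with state=present, sketched directly (run as root; the key string in the usage note is abbreviated, not the full key from the log):

    #!/usr/bin/env python3
    """Sketch: what ansible.posix.authorized_key does here with state=present."""
    import os
    import pwd
    from pathlib import Path

    def authorize(user: str, key: str) -> None:
        ent = pwd.getpwnam(user)
        ssh_dir = Path(ent.pw_dir, ".ssh")
        ssh_dir.mkdir(mode=0o700, exist_ok=True)         # manage_dir=True behaviour
        auth = ssh_dir / "authorized_keys"
        lines = auth.read_text().splitlines() if auth.exists() else []
        if key not in lines:                             # present but not exclusive
            auth.write_text("\n".join(lines + [key]) + "\n")
        auth.chmod(0o600)
        for path in (ssh_dir, auth):
            os.chown(path, ent.pw_uid, ent.pw_gid)

    # for user in ("zuul", "root", "cloud-admin"):
    #     authorize(user, "ecdsa-sha2-nistp256 AAAA... zuul@np0005551749.novalocal")
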
Dec 09 11:33:15 np0005551750.novalocal sudo[25141]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkkfzdtamgogfixkmzivoxizjnnhkyom ; /usr/bin/python3'
Dec 09 11:33:15 np0005551750.novalocal sudo[25141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:33:16 np0005551750.novalocal python3[25150]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 09 11:33:16 np0005551750.novalocal sudo[25141]: pam_unix(sudo:session): session closed for user root
Dec 09 11:33:16 np0005551750.novalocal sudo[25411]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwrluuowpjerwrdmwyuxfprgwrkhiinx ; /usr/bin/python3'
Dec 09 11:33:16 np0005551750.novalocal sudo[25411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:33:16 np0005551750.novalocal python3[25422]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1765279995.8664079-167-160709791379132/source _original_basename=tmpehqmvjr7 follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:33:16 np0005551750.novalocal sudo[25411]: pam_unix(sudo:session): session closed for user root
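
The cloud-admin user created just above (UID 1002) now gets a sudoers drop-in with mode 0640. The rule itself is not logged; the sketch below uses the passwordless-sudo rule typical of CI images, labeled as an assumption, and validates the file the way sudo expects:

    #!/usr/bin/env python3
    """Sketch: install /etc/sudoers.d/cloud-admin. The NOPASSWD rule is an
    assumption; only the path and the 0640 mode come from the log."""
    import subprocess
    from pathlib import Path

    path = Path("/etc/sudoers.d/cloud-admin")
    path.write_text("cloud-admin ALL=(ALL) NOPASSWD:ALL\n")   # hypothetical rule
    path.chmod(0o640)
    subprocess.run(["visudo", "-cf", str(path)], check=True)  # syntax check
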
Dec 09 11:33:16 np0005551750.novalocal sshd-session[24605]: Connection closed by 87.236.176.76 port 40975
Dec 09 11:33:17 np0005551750.novalocal sudo[25680]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjogmfglmcjjxbnlaztdpnfknsrjmukp ; /usr/bin/python3'
Dec 09 11:33:17 np0005551750.novalocal sudo[25680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:33:17 np0005551750.novalocal sshd-session[25602]: Connection closed by 87.236.176.76 port 53167 [preauth]
Dec 09 11:33:17 np0005551750.novalocal python3[25689]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Dec 09 11:33:17 np0005551750.novalocal systemd[1]: Starting Hostname Service...
Dec 09 11:33:17 np0005551750.novalocal systemd[1]: Started Hostname Service.
Dec 09 11:33:17 np0005551750.novalocal systemd-hostnamed[25778]: Changed pretty hostname to 'compute-0'
Dec 09 11:33:17 compute-0 systemd-hostnamed[25778]: Hostname set to <compute-0> (static)
Dec 09 11:33:17 compute-0 NetworkManager[7193]: <info>  [1765279997.6712] hostname: static hostname changed from "np0005551750.novalocal" to "compute-0"
Dec 09 11:33:17 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 09 11:33:17 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 09 11:33:17 compute-0 sudo[25680]: pam_unix(sudo:session): session closed for user root
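
ansible.builtin.hostname with use=systemd asks systemd-hostnamed over D-Bus to change the name; hostnamed sets both the pretty and static hostnames, and NetworkManager picks the change up at once, which is why the log prefix flips from np0005551750.novalocal to compute-0 mid-stream above. The equivalent CLI call:

    #!/usr/bin/env python3
    """Sketch: hostnamectl equivalent of the hostname task above."""
    import subprocess

    subprocess.run(["hostnamectl", "set-hostname", "compute-0"], check=True)
    static = subprocess.run(["hostnamectl", "--static"], capture_output=True,
                            text=True, check=True).stdout.strip()
    print("static hostname:", static)  # expect: compute-0
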
Dec 09 11:33:18 compute-0 sshd-session[23952]: Connection closed by 38.102.83.114 port 51734
Dec 09 11:33:18 compute-0 sshd-session[23899]: pam_unix(sshd:session): session closed for user zuul
Dec 09 11:33:18 compute-0 systemd[1]: session-6.scope: Deactivated successfully.
Dec 09 11:33:18 compute-0 systemd[1]: session-6.scope: Consumed 2.343s CPU time.
Dec 09 11:33:18 compute-0 systemd-logind[799]: Session 6 logged out. Waiting for processes to exit.
Dec 09 11:33:18 compute-0 systemd-logind[799]: Removed session 6.
Dec 09 11:33:27 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 09 11:33:31 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 09 11:33:31 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 09 11:33:31 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1min 3.640s CPU time.
Dec 09 11:33:31 compute-0 systemd[1]: run-r293f6c51edf04273bd5dcdf8763bd40b.service: Deactivated successfully.
Dec 09 11:33:47 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 09 11:37:03 compute-0 systemd[1]: Starting dnf makecache...
Dec 09 11:37:03 compute-0 dnf[29964]: Failed determining last makecache time.
Dec 09 11:37:04 compute-0 dnf[29964]: CentOS Stream 9 - BaseOS                         54 kB/s | 6.1 kB     00:00
Dec 09 11:37:04 compute-0 dnf[29964]: CentOS Stream 9 - AppStream                      56 kB/s | 6.5 kB     00:00
Dec 09 11:37:04 compute-0 dnf[29964]: CentOS Stream 9 - CRB                            26 kB/s | 6.0 kB     00:00
Dec 09 11:37:04 compute-0 dnf[29964]: CentOS Stream 9 - Extras packages                66 kB/s | 8.3 kB     00:00
Dec 09 11:37:04 compute-0 dnf[29964]: Metadata cache created.
Dec 09 11:37:05 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Dec 09 11:37:05 compute-0 systemd[1]: Finished dnf makecache.
Dec 09 11:38:49 compute-0 sshd-session[29970]: Accepted publickey for zuul from 38.102.83.236 port 54086 ssh2: RSA SHA256:6Ie4ZXK9Ek36UC2sJEF3TJKSrACzyJGKSwiteASgUXs
Dec 09 11:38:49 compute-0 systemd-logind[799]: New session 7 of user zuul.
Dec 09 11:38:49 compute-0 systemd[1]: Started Session 7 of User zuul.
Dec 09 11:38:49 compute-0 sshd-session[29970]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 11:38:49 compute-0 python3[30046]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 11:38:51 compute-0 sudo[30160]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvbzrktytpdenlbziwqvveowdgyqzgak ; /usr/bin/python3'
Dec 09 11:38:51 compute-0 sudo[30160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:38:51 compute-0 python3[30162]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 09 11:38:51 compute-0 sudo[30160]: pam_unix(sudo:session): session closed for user root
Dec 09 11:38:52 compute-0 sudo[30233]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgygisgokscquyajryiycbjujnrtiycz ; /usr/bin/python3'
Dec 09 11:38:52 compute-0 sudo[30233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:38:52 compute-0 python3[30235]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765280331.5370102-34029-232113271321164/source mode=0755 _original_basename=delorean.repo follow=False checksum=0f7c85cc67bf467c48edf98d5acc63e62d808324 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:38:52 compute-0 sudo[30233]: pam_unix(sudo:session): session closed for user root
Dec 09 11:38:52 compute-0 sudo[30259]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agnfndooyqpvyjyiqbbmsztshedzeiah ; /usr/bin/python3'
Dec 09 11:38:52 compute-0 sudo[30259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:38:52 compute-0 python3[30261]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 09 11:38:52 compute-0 sudo[30259]: pam_unix(sudo:session): session closed for user root
Dec 09 11:38:52 compute-0 sudo[30332]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzvtdsebaxurlkhbieuqizfcqoglevkh ; /usr/bin/python3'
Dec 09 11:38:52 compute-0 sudo[30332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:38:53 compute-0 python3[30334]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765280331.5370102-34029-232113271321164/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:38:53 compute-0 sudo[30332]: pam_unix(sudo:session): session closed for user root
Dec 09 11:38:53 compute-0 sudo[30358]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvwzcbgghhuadocmamvqkobkqswixozk ; /usr/bin/python3'
Dec 09 11:38:53 compute-0 sudo[30358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:38:53 compute-0 python3[30360]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 09 11:38:53 compute-0 sudo[30358]: pam_unix(sudo:session): session closed for user root
Dec 09 11:38:53 compute-0 sudo[30431]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkdqppenltxxmebygwlehiczykgdpzpm ; /usr/bin/python3'
Dec 09 11:38:53 compute-0 sudo[30431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:38:53 compute-0 python3[30433]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765280331.5370102-34029-232113271321164/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:38:53 compute-0 sudo[30431]: pam_unix(sudo:session): session closed for user root
Dec 09 11:38:53 compute-0 sudo[30457]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phivadfekgcwjenqtyqycufbxvbthqzk ; /usr/bin/python3'
Dec 09 11:38:53 compute-0 sudo[30457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:38:53 compute-0 python3[30459]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 09 11:38:53 compute-0 sudo[30457]: pam_unix(sudo:session): session closed for user root
Dec 09 11:38:54 compute-0 sudo[30530]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyhriwebjtidslagujxjzuefigxvwnea ; /usr/bin/python3'
Dec 09 11:38:54 compute-0 sudo[30530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:38:54 compute-0 python3[30532]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765280331.5370102-34029-232113271321164/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:38:54 compute-0 sudo[30530]: pam_unix(sudo:session): session closed for user root
Dec 09 11:38:54 compute-0 sudo[30556]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nluvlbgxdjoreqhkzerejmpvsqvalhhd ; /usr/bin/python3'
Dec 09 11:38:54 compute-0 sudo[30556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:38:54 compute-0 python3[30558]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 09 11:38:54 compute-0 sudo[30556]: pam_unix(sudo:session): session closed for user root
Dec 09 11:38:54 compute-0 sudo[30629]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnzjinprtmzmtezdmnbvhoffbvvtxaks ; /usr/bin/python3'
Dec 09 11:38:54 compute-0 sudo[30629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:38:54 compute-0 python3[30631]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765280331.5370102-34029-232113271321164/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:38:54 compute-0 sudo[30629]: pam_unix(sudo:session): session closed for user root
Dec 09 11:38:54 compute-0 sudo[30655]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxjakgfpgjrjszagznqpxvwblyngfobb ; /usr/bin/python3'
Dec 09 11:38:54 compute-0 sudo[30655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:38:55 compute-0 python3[30657]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 09 11:38:55 compute-0 sudo[30655]: pam_unix(sudo:session): session closed for user root
Dec 09 11:38:55 compute-0 sudo[30728]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubggsvezsfxbnnbwxkyawjeogutfjjwv ; /usr/bin/python3'
Dec 09 11:38:55 compute-0 sudo[30728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:38:55 compute-0 python3[30730]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765280331.5370102-34029-232113271321164/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:38:55 compute-0 sudo[30728]: pam_unix(sudo:session): session closed for user root
Dec 09 11:38:55 compute-0 sudo[30754]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmnnzpuforfzvmbuktckejysrwecjlrx ; /usr/bin/python3'
Dec 09 11:38:55 compute-0 sudo[30754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:38:55 compute-0 python3[30756]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 09 11:38:55 compute-0 sudo[30754]: pam_unix(sudo:session): session closed for user root
Dec 09 11:38:55 compute-0 sudo[30827]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xawxhnefmejouypbsgajtrnvfsgmsapg ; /usr/bin/python3'
Dec 09 11:38:55 compute-0 sudo[30827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:38:56 compute-0 python3[30829]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765280331.5370102-34029-232113271321164/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=2583a70b3ee76a9837350b0837bc004a8e52405c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:38:56 compute-0 sudo[30827]: pam_unix(sudo:session): session closed for user root
Dec 09 11:38:59 compute-0 sshd-session[30854]: Unable to negotiate with 192.168.122.11 port 37694: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Dec 09 11:38:59 compute-0 sshd-session[30857]: Connection closed by 192.168.122.11 port 37670 [preauth]
Dec 09 11:38:59 compute-0 sshd-session[30858]: Connection closed by 192.168.122.11 port 37686 [preauth]
Dec 09 11:38:59 compute-0 sshd-session[30855]: Unable to negotiate with 192.168.122.11 port 37704: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Dec 09 11:38:59 compute-0 sshd-session[30856]: Unable to negotiate with 192.168.122.11 port 37688: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Dec 09 11:39:07 compute-0 sshd-session[30864]: Invalid user user from 78.128.112.74 port 48408
Dec 09 11:39:08 compute-0 sshd-session[30864]: Connection closed by invalid user user 78.128.112.74 port 48408 [preauth]
Dec 09 11:39:10 compute-0 python3[30889]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 11:44:10 compute-0 sshd-session[29973]: Received disconnect from 38.102.83.236 port 54086:11: disconnected by user
Dec 09 11:44:10 compute-0 sshd-session[29973]: Disconnected from user zuul 38.102.83.236 port 54086
Dec 09 11:44:10 compute-0 sshd-session[29970]: pam_unix(sshd:session): session closed for user zuul
Dec 09 11:44:10 compute-0 systemd[1]: session-7.scope: Deactivated successfully.
Dec 09 11:44:10 compute-0 systemd[1]: session-7.scope: Consumed 5.072s CPU time.
Dec 09 11:44:10 compute-0 systemd-logind[799]: Session 7 logged out. Waiting for processes to exit.
Dec 09 11:44:10 compute-0 systemd-logind[799]: Removed session 7.
Dec 09 11:45:39 compute-0 sshd-session[30894]: Received disconnect from 193.46.255.99 port 49216:11:  [preauth]
Dec 09 11:45:39 compute-0 sshd-session[30894]: Disconnected from authenticating user root 193.46.255.99 port 49216 [preauth]
Dec 09 11:51:57 compute-0 sshd-session[30898]: Accepted publickey for zuul from 192.168.122.30 port 38366 ssh2: ECDSA SHA256:9TQybH6jbBrVcztEaDmRsG3ssVtaycQ7UiUr3v9GScY
Dec 09 11:51:57 compute-0 systemd-logind[799]: New session 8 of user zuul.
Dec 09 11:51:57 compute-0 systemd[1]: Started Session 8 of User zuul.
Dec 09 11:51:57 compute-0 sshd-session[30898]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 11:51:58 compute-0 python3.9[31051]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 11:51:59 compute-0 sudo[31230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clvsdqqkudowbnbjxrsxqfvzjuruekns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281119.0043771-56-233159130609635/AnsiballZ_command.py'
Dec 09 11:51:59 compute-0 sudo[31230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:51:59 compute-0 python3.9[31232]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 11:52:08 compute-0 sudo[31230]: pam_unix(sudo:session): session closed for user root
Dec 09 11:52:08 compute-0 sshd-session[30901]: Connection closed by 192.168.122.30 port 38366
Dec 09 11:52:08 compute-0 sshd-session[30898]: pam_unix(sshd:session): session closed for user zuul
Dec 09 11:52:08 compute-0 systemd[1]: session-8.scope: Deactivated successfully.
Dec 09 11:52:08 compute-0 systemd[1]: session-8.scope: Consumed 8.444s CPU time.
Dec 09 11:52:08 compute-0 systemd-logind[799]: Session 8 logged out. Waiting for processes to exit.
Dec 09 11:52:08 compute-0 systemd-logind[799]: Removed session 8.
Dec 09 11:52:23 compute-0 sshd-session[31290]: Accepted publickey for zuul from 192.168.122.30 port 34450 ssh2: ECDSA SHA256:9TQybH6jbBrVcztEaDmRsG3ssVtaycQ7UiUr3v9GScY
Dec 09 11:52:23 compute-0 systemd-logind[799]: New session 9 of user zuul.
Dec 09 11:52:23 compute-0 systemd[1]: Started Session 9 of User zuul.
Dec 09 11:52:23 compute-0 sshd-session[31290]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 11:52:24 compute-0 python3.9[31443]: ansible-ansible.legacy.ping Invoked with data=pong
Dec 09 11:52:25 compute-0 python3.9[31617]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 11:52:26 compute-0 sudo[31767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfnowzrwuieoeizbswgzucowgwjjvite ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281146.1691132-93-1229498669234/AnsiballZ_command.py'
Dec 09 11:52:26 compute-0 sudo[31767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:52:26 compute-0 python3.9[31769]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 11:52:26 compute-0 sudo[31767]: pam_unix(sudo:session): session closed for user root
Dec 09 11:52:27 compute-0 sudo[31920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wipflnawgirvzfkypsmsfulymvfkkfrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281147.196098-129-83938094158245/AnsiballZ_stat.py'
Dec 09 11:52:27 compute-0 sudo[31920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:52:27 compute-0 python3.9[31922]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 11:52:27 compute-0 sudo[31920]: pam_unix(sudo:session): session closed for user root
Dec 09 11:52:28 compute-0 sudo[32072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srytxbuluwgxrgdlbbqynqqvknheeaqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281147.9913242-153-226552651188253/AnsiballZ_file.py'
Dec 09 11:52:28 compute-0 sudo[32072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:52:28 compute-0 python3.9[32074]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:52:28 compute-0 sudo[32072]: pam_unix(sudo:session): session closed for user root
Dec 09 11:52:29 compute-0 sudo[32224]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtdwkcxicbtkflgjtupqwcecoozesluw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281148.7809963-177-20442242758173/AnsiballZ_stat.py'
Dec 09 11:52:29 compute-0 sudo[32224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:52:29 compute-0 python3.9[32226]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 11:52:29 compute-0 sudo[32224]: pam_unix(sudo:session): session closed for user root
Dec 09 11:52:29 compute-0 sudo[32347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dazivvsxxfmywhbdjsfgbxmkaokivica ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281148.7809963-177-20442242758173/AnsiballZ_copy.py'
Dec 09 11:52:29 compute-0 sudo[32347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:52:29 compute-0 python3.9[32349]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1765281148.7809963-177-20442242758173/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:52:29 compute-0 sudo[32347]: pam_unix(sudo:session): session closed for user root
Dec 09 11:52:30 compute-0 sudo[32499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-buifzbyphmfswoskwfgsopdmezrjqmxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281150.114737-222-96638659237575/AnsiballZ_setup.py'
Dec 09 11:52:30 compute-0 sudo[32499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:52:30 compute-0 python3.9[32501]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 11:52:30 compute-0 sudo[32499]: pam_unix(sudo:session): session closed for user root
Dec 09 11:52:31 compute-0 sudo[32655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-coydqcuztqvzfqfecezebreeaagctrgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281151.1182446-246-120185883385134/AnsiballZ_file.py'
Dec 09 11:52:31 compute-0 sudo[32655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:52:31 compute-0 python3.9[32657]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 11:52:31 compute-0 sudo[32655]: pam_unix(sudo:session): session closed for user root
Dec 09 11:52:32 compute-0 sudo[32807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtwihxiytaomkolzgmlakrlyagvhhxio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281151.7822804-273-48085014815668/AnsiballZ_file.py'
Dec 09 11:52:32 compute-0 sudo[32807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:52:32 compute-0 python3.9[32809]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 11:52:32 compute-0 sudo[32807]: pam_unix(sudo:session): session closed for user root
Dec 09 11:52:33 compute-0 python3.9[32959]: ansible-ansible.builtin.service_facts Invoked
Dec 09 11:52:36 compute-0 python3.9[33212]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:52:37 compute-0 python3.9[33362]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 11:52:38 compute-0 python3.9[33516]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 11:52:39 compute-0 sudo[33672]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrnzlpukrvmvyzvyeybzdpqknjudboxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281159.020361-417-80617632866645/AnsiballZ_setup.py'
Dec 09 11:52:39 compute-0 sudo[33672]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:52:39 compute-0 python3.9[33674]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 09 11:52:39 compute-0 sudo[33672]: pam_unix(sudo:session): session closed for user root
Dec 09 11:52:40 compute-0 sudo[33756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhrmxjnzyuzrexswromvookoaynhyztc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281159.020361-417-80617632866645/AnsiballZ_dnf.py'
Dec 09 11:52:40 compute-0 sudo[33756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:52:40 compute-0 python3.9[33758]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 09 11:53:40 compute-0 systemd[1]: Reloading.
Dec 09 11:53:41 compute-0 systemd-rc-local-generator[33957]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 11:53:41 compute-0 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Dec 09 11:53:41 compute-0 systemd[1]: Reloading.
Dec 09 11:53:41 compute-0 systemd-rc-local-generator[33998]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 11:53:41 compute-0 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Dec 09 11:53:41 compute-0 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Dec 09 11:53:41 compute-0 systemd[1]: Reloading.
Dec 09 11:53:41 compute-0 systemd-rc-local-generator[34036]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 11:53:41 compute-0 systemd[1]: Listening on LVM2 poll daemon socket.
Dec 09 11:53:42 compute-0 dbus-broker-launch[776]: Noticed file-system modification, trigger reload.
Dec 09 11:53:42 compute-0 dbus-broker-launch[776]: Noticed file-system modification, trigger reload.
Dec 09 11:54:53 compute-0 kernel: SELinux:  Converting 2719 SID table entries...
Dec 09 11:54:53 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 09 11:54:53 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 09 11:54:53 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 09 11:54:53 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 09 11:54:53 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 09 11:54:53 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 09 11:54:53 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 09 11:54:53 compute-0 dbus-broker-launch[778]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Dec 09 11:54:53 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 09 11:54:53 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 09 11:54:53 compute-0 systemd[1]: Reloading.
Dec 09 11:54:53 compute-0 systemd-rc-local-generator[34363]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 11:54:54 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 09 11:54:54 compute-0 sudo[33756]: pam_unix(sudo:session): session closed for user root
Dec 09 11:54:55 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 09 11:54:55 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 09 11:54:55 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.327s CPU time.
Dec 09 11:54:55 compute-0 systemd[1]: run-r6e847b350b05473d8afea1086b90346a.service: Deactivated successfully.
Dec 09 11:54:55 compute-0 sudo[35273]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vevlphxyyqmjpondxynuelnmabfbexht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281294.8867154-453-261898453190069/AnsiballZ_command.py'
Dec 09 11:54:55 compute-0 sudo[35273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:54:55 compute-0 python3.9[35275]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 11:54:56 compute-0 sudo[35273]: pam_unix(sudo:session): session closed for user root
Dec 09 11:54:57 compute-0 sudo[35554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnuxhcmufmmhsntlbjiyiisbdwgqbkve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281296.5803275-477-187075600641790/AnsiballZ_selinux.py'
Dec 09 11:54:57 compute-0 sudo[35554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:54:57 compute-0 python3.9[35556]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Dec 09 11:54:57 compute-0 sudo[35554]: pam_unix(sudo:session): session closed for user root
Dec 09 11:54:58 compute-0 sudo[35706]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nltkiniqbbfxefbtnbmgwxshtrhiwbpy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281297.9991515-510-1572536901831/AnsiballZ_command.py'
Dec 09 11:54:58 compute-0 sudo[35706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:54:58 compute-0 python3.9[35708]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Dec 09 11:55:01 compute-0 sudo[35706]: pam_unix(sudo:session): session closed for user root
Dec 09 11:55:02 compute-0 sudo[35860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnwhbnilxbeeaarhqibrgtfisdynhmvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281301.9230826-534-258298424769808/AnsiballZ_file.py'
Dec 09 11:55:02 compute-0 sudo[35860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:55:02 compute-0 python3.9[35862]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:55:02 compute-0 sudo[35860]: pam_unix(sudo:session): session closed for user root
Dec 09 11:55:03 compute-0 sudo[36012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqyfksftomxwaqwpsmjekrkybwfobogj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281303.0688202-558-93956185336661/AnsiballZ_mount.py'
Dec 09 11:55:03 compute-0 sudo[36012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:55:04 compute-0 python3.9[36014]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Dec 09 11:55:04 compute-0 sudo[36012]: pam_unix(sudo:session): session closed for user root
Dec 09 11:55:05 compute-0 sudo[36164]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynviwbbavqvrhmaqaanlbgsvmhfibnop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281305.0749254-642-128787540984356/AnsiballZ_file.py'
Dec 09 11:55:05 compute-0 sudo[36164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:55:05 compute-0 python3.9[36166]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 11:55:05 compute-0 sudo[36164]: pam_unix(sudo:session): session closed for user root
Dec 09 11:55:08 compute-0 sudo[36316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llfqkieahokddchocdllzltqwmehcasn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281308.6017416-666-85133567690097/AnsiballZ_stat.py'
Dec 09 11:55:08 compute-0 sudo[36316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:55:10 compute-0 python3.9[36318]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 11:55:10 compute-0 sudo[36316]: pam_unix(sudo:session): session closed for user root
Dec 09 11:55:10 compute-0 sudo[36439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btevwhwtzujrajmvpfykpsdhwjkhseyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281308.6017416-666-85133567690097/AnsiballZ_copy.py'
Dec 09 11:55:10 compute-0 sudo[36439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:55:11 compute-0 python3.9[36441]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765281308.6017416-666-85133567690097/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=076c443a96323a89dd3bf198a62fc83d8f8af357 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:55:11 compute-0 sudo[36439]: pam_unix(sudo:session): session closed for user root
Dec 09 11:55:13 compute-0 sudo[36591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpvwtfjqieixbaflygfxudknbgiadxcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281313.72329-738-62233417692697/AnsiballZ_stat.py'
Dec 09 11:55:13 compute-0 sudo[36591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:55:14 compute-0 python3.9[36593]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 11:55:14 compute-0 sudo[36591]: pam_unix(sudo:session): session closed for user root
Dec 09 11:55:14 compute-0 sudo[36743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxxgjkwbtsjihueogongghnkltnidnmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281314.449929-762-18145213940055/AnsiballZ_command.py'
Dec 09 11:55:14 compute-0 sudo[36743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:55:14 compute-0 python3.9[36745]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 11:55:15 compute-0 sudo[36743]: pam_unix(sudo:session): session closed for user root
Dec 09 11:55:15 compute-0 sudo[36896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnidwffwxhlzzmnzmycaemlscxhqpcrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281315.1903982-786-128527449738373/AnsiballZ_file.py'
Dec 09 11:55:15 compute-0 sudo[36896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:55:15 compute-0 python3.9[36898]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:55:15 compute-0 sudo[36896]: pam_unix(sudo:session): session closed for user root
Dec 09 11:55:16 compute-0 sudo[37048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwncxkzwndymuqvytdrmupwuruziubeb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281316.300014-819-165215044659631/AnsiballZ_getent.py'
Dec 09 11:55:16 compute-0 sudo[37048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:55:16 compute-0 python3.9[37050]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Dec 09 11:55:16 compute-0 sudo[37048]: pam_unix(sudo:session): session closed for user root
Dec 09 11:55:16 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 09 11:55:17 compute-0 sudo[37202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zudctihzctruaumulyukgavsmuenewhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281317.1549015-843-108023165733032/AnsiballZ_group.py'
Dec 09 11:55:17 compute-0 sudo[37202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:55:17 compute-0 python3.9[37204]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 09 11:55:17 compute-0 groupadd[37205]: group added to /etc/group: name=qemu, GID=107
Dec 09 11:55:17 compute-0 groupadd[37205]: group added to /etc/gshadow: name=qemu
Dec 09 11:55:17 compute-0 groupadd[37205]: new group: name=qemu, GID=107
Dec 09 11:55:17 compute-0 sudo[37202]: pam_unix(sudo:session): session closed for user root
Dec 09 11:55:18 compute-0 sudo[37360]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpqznedbheuujdhfgjikehzuqmbtvuqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281318.1321259-867-264500538614442/AnsiballZ_user.py'
Dec 09 11:55:18 compute-0 sudo[37360]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:55:18 compute-0 python3.9[37362]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 09 11:55:19 compute-0 useradd[37364]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Dec 09 11:55:19 compute-0 sudo[37360]: pam_unix(sudo:session): session closed for user root
Dec 09 11:55:19 compute-0 sudo[37520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kstqmppgottwjaksegggmwjogklbmroj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281319.2785382-891-254708804724230/AnsiballZ_getent.py'
Dec 09 11:55:19 compute-0 sudo[37520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:55:19 compute-0 python3.9[37522]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Dec 09 11:55:19 compute-0 sudo[37520]: pam_unix(sudo:session): session closed for user root
Dec 09 11:55:20 compute-0 sudo[37673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymmsqfumoiqwyomtjanmjjbhdyxissfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281319.9618816-915-184590818224035/AnsiballZ_group.py'
Dec 09 11:55:20 compute-0 sudo[37673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:55:20 compute-0 python3.9[37675]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 09 11:55:20 compute-0 groupadd[37676]: group added to /etc/group: name=hugetlbfs, GID=42477
Dec 09 11:55:20 compute-0 groupadd[37676]: group added to /etc/gshadow: name=hugetlbfs
Dec 09 11:55:20 compute-0 groupadd[37676]: new group: name=hugetlbfs, GID=42477
Dec 09 11:55:20 compute-0 sudo[37673]: pam_unix(sudo:session): session closed for user root
Dec 09 11:55:21 compute-0 sudo[37831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqefgvzdqjnmpkhoqvynftklwyzksigb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281320.8569238-942-70692530185183/AnsiballZ_file.py'
Dec 09 11:55:21 compute-0 sudo[37831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:55:21 compute-0 python3.9[37833]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Dec 09 11:55:21 compute-0 sudo[37831]: pam_unix(sudo:session): session closed for user root
Dec 09 11:55:22 compute-0 sudo[37983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozefnnfgjwpmtyevpzcjsyyziixdpihp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281321.8310483-975-159586670942360/AnsiballZ_dnf.py'
Dec 09 11:55:22 compute-0 sudo[37983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:55:22 compute-0 python3.9[37985]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 09 11:55:25 compute-0 sudo[37983]: pam_unix(sudo:session): session closed for user root
Dec 09 11:55:25 compute-0 sudo[38136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubzpfidruglkqeprktcaklvefegutvwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281325.6238372-999-197988101169500/AnsiballZ_file.py'
Dec 09 11:55:25 compute-0 sudo[38136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:55:26 compute-0 python3.9[38138]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 11:55:26 compute-0 sudo[38136]: pam_unix(sudo:session): session closed for user root
Dec 09 11:55:26 compute-0 sudo[38288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjgghokupnqmnwsmfydhbdxkopjusrbf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281326.31687-1023-14244425586257/AnsiballZ_stat.py'
Dec 09 11:55:26 compute-0 sudo[38288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:55:26 compute-0 python3.9[38290]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 11:55:26 compute-0 sudo[38288]: pam_unix(sudo:session): session closed for user root
Dec 09 11:55:27 compute-0 sudo[38411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbiucedvuglvtdatbzkrgogvxmobuegv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281326.31687-1023-14244425586257/AnsiballZ_copy.py'
Dec 09 11:55:27 compute-0 sudo[38411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:55:27 compute-0 python3.9[38413]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765281326.31687-1023-14244425586257/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 09 11:55:27 compute-0 sudo[38411]: pam_unix(sudo:session): session closed for user root
Dec 09 11:55:28 compute-0 sudo[38563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdznaajpxcsgrmumibkdxnsjtmbfjqbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281327.6037796-1068-266958745675073/AnsiballZ_systemd.py'
Dec 09 11:55:28 compute-0 sudo[38563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:55:28 compute-0 python3.9[38565]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 09 11:55:28 compute-0 systemd[1]: Starting Load Kernel Modules...
Dec 09 11:55:28 compute-0 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 09 11:55:28 compute-0 kernel: Bridge firewalling registered
Dec 09 11:55:28 compute-0 systemd-modules-load[38569]: Inserted module 'br_netfilter'
Dec 09 11:55:28 compute-0 systemd[1]: Finished Load Kernel Modules.
Dec 09 11:55:28 compute-0 sudo[38563]: pam_unix(sudo:session): session closed for user root
Dec 09 11:55:29 compute-0 sudo[38723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olpurqnpeiezllkrrqerhcfgbtssehjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281328.9199824-1092-169610245375338/AnsiballZ_stat.py'
Dec 09 11:55:29 compute-0 sudo[38723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:55:29 compute-0 python3.9[38725]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 11:55:29 compute-0 sudo[38723]: pam_unix(sudo:session): session closed for user root
Dec 09 11:55:29 compute-0 sudo[38846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-juxqcczapkdqtnktnfrbkdmbdxbmjjjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281328.9199824-1092-169610245375338/AnsiballZ_copy.py'
Dec 09 11:55:29 compute-0 sudo[38846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:55:30 compute-0 python3.9[38848]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765281328.9199824-1092-169610245375338/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 09 11:55:30 compute-0 sudo[38846]: pam_unix(sudo:session): session closed for user root
Dec 09 11:55:30 compute-0 sudo[38998]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ondxuvbklzniwscwuiwvrmqwlaydotgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281330.6044621-1146-51261135996697/AnsiballZ_dnf.py'
Dec 09 11:55:30 compute-0 sudo[38998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:55:31 compute-0 python3.9[39000]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 09 11:55:35 compute-0 dbus-broker-launch[776]: Noticed file-system modification, trigger reload.
Dec 09 11:55:35 compute-0 dbus-broker-launch[776]: Noticed file-system modification, trigger reload.
Dec 09 11:55:35 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 09 11:55:35 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 09 11:55:35 compute-0 systemd[1]: Reloading.
Dec 09 11:55:35 compute-0 systemd-rc-local-generator[39059]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 11:55:35 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 09 11:55:36 compute-0 sudo[38998]: pam_unix(sudo:session): session closed for user root
Dec 09 11:55:37 compute-0 python3.9[40581]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 11:55:38 compute-0 python3.9[41505]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Dec 09 11:55:39 compute-0 python3.9[42235]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 11:55:39 compute-0 sudo[43109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpauekaxoebvsrkfabfabzraqvmmxeov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281339.4796162-1263-209266914812111/AnsiballZ_command.py'
Dec 09 11:55:39 compute-0 sudo[43109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:55:39 compute-0 python3.9[43119]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 11:55:40 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 09 11:55:40 compute-0 systemd[1]: Starting Authorization Manager...
Dec 09 11:55:40 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Dec 09 11:55:40 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 09 11:55:40 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 09 11:55:40 compute-0 systemd[1]: man-db-cache-update.service: Consumed 5.580s CPU time.
Dec 09 11:55:40 compute-0 systemd[1]: run-r6ca26013fb464441a7d63a1f3bdd22bf.service: Deactivated successfully.
Dec 09 11:55:40 compute-0 polkitd[43420]: Started polkitd version 0.117
Dec 09 11:55:40 compute-0 polkitd[43420]: Loading rules from directory /etc/polkit-1/rules.d
Dec 09 11:55:40 compute-0 polkitd[43420]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 09 11:55:40 compute-0 polkitd[43420]: Finished loading, compiling and executing 2 rules
Dec 09 11:55:40 compute-0 polkitd[43420]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Dec 09 11:55:40 compute-0 systemd[1]: Started Authorization Manager.
Dec 09 11:55:40 compute-0 sudo[43109]: pam_unix(sudo:session): session closed for user root
Dec 09 11:55:41 compute-0 sudo[43589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymmnkookkiqhvtqnzagwqgcgssgndayg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281340.899129-1290-18451462809081/AnsiballZ_systemd.py'
Dec 09 11:55:41 compute-0 sudo[43589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:55:41 compute-0 python3.9[43591]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 11:55:41 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec 09 11:55:41 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Dec 09 11:55:41 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec 09 11:55:41 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 09 11:55:41 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Dec 09 11:55:41 compute-0 sudo[43589]: pam_unix(sudo:session): session closed for user root
Dec 09 11:55:42 compute-0 python3.9[43752]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Dec 09 11:55:45 compute-0 sudo[43902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsomxkmxmmzbisoesjvvcjbcikyyufww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281345.4656758-1461-59068264543574/AnsiballZ_systemd.py'
Dec 09 11:55:45 compute-0 sudo[43902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:55:46 compute-0 python3.9[43904]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 11:55:46 compute-0 systemd[1]: Reloading.
Dec 09 11:55:46 compute-0 systemd-rc-local-generator[43933]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 11:55:46 compute-0 sudo[43902]: pam_unix(sudo:session): session closed for user root
Dec 09 11:55:46 compute-0 sudo[44092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmyypomvhaqtwyrvvpdymatpbalfculk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281346.4977434-1461-198809407824625/AnsiballZ_systemd.py'
Dec 09 11:55:46 compute-0 sudo[44092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:55:47 compute-0 python3.9[44094]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 11:55:47 compute-0 systemd[1]: Reloading.
Dec 09 11:55:47 compute-0 systemd-rc-local-generator[44122]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 11:55:47 compute-0 sudo[44092]: pam_unix(sudo:session): session closed for user root
Dec 09 11:55:47 compute-0 sudo[44281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yizajxxnvkhztimgntleptrxhgeawclc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281347.6796415-1509-262254080676914/AnsiballZ_command.py'
Dec 09 11:55:47 compute-0 sudo[44281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:55:48 compute-0 python3.9[44283]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 11:55:48 compute-0 sudo[44281]: pam_unix(sudo:session): session closed for user root
Dec 09 11:55:48 compute-0 sudo[44434]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytgqaplejiesloqzkyezoghzttfzfmrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281348.3841302-1533-24960785650248/AnsiballZ_command.py'
Dec 09 11:55:48 compute-0 sudo[44434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:55:48 compute-0 python3.9[44436]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 11:55:48 compute-0 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Dec 09 11:55:48 compute-0 sudo[44434]: pam_unix(sudo:session): session closed for user root
Dec 09 11:55:49 compute-0 sudo[44587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azalpchcexfltpylsrpsbzbugyazrnmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281349.025723-1557-132663847735011/AnsiballZ_command.py'
Dec 09 11:55:49 compute-0 sudo[44587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:55:49 compute-0 python3.9[44589]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 11:55:50 compute-0 sudo[44587]: pam_unix(sudo:session): session closed for user root
Dec 09 11:55:51 compute-0 sudo[44749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fuqjdpojbtmhyxlosbcifhsnwljqovrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281351.2853923-1581-257617530002383/AnsiballZ_command.py'
Dec 09 11:55:51 compute-0 sudo[44749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:55:51 compute-0 python3.9[44751]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 11:55:51 compute-0 sudo[44749]: pam_unix(sudo:session): session closed for user root
Dec 09 11:55:52 compute-0 sudo[44902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vytdfzxxoutuvvspbvlinhryrpeqwqca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281351.9783788-1605-190517934022562/AnsiballZ_systemd.py'
Dec 09 11:55:52 compute-0 sudo[44902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:55:52 compute-0 python3.9[44904]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 09 11:55:52 compute-0 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 09 11:55:52 compute-0 systemd[1]: Stopped Apply Kernel Variables.
Dec 09 11:55:52 compute-0 systemd[1]: Stopping Apply Kernel Variables...
Dec 09 11:55:52 compute-0 systemd[1]: Starting Apply Kernel Variables...
Dec 09 11:55:52 compute-0 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 09 11:55:52 compute-0 systemd[1]: Finished Apply Kernel Variables.
Dec 09 11:55:52 compute-0 sudo[44902]: pam_unix(sudo:session): session closed for user root
Dec 09 11:55:53 compute-0 sshd-session[31293]: Connection closed by 192.168.122.30 port 34450
Dec 09 11:55:53 compute-0 sshd-session[31290]: pam_unix(sshd:session): session closed for user zuul
Dec 09 11:55:53 compute-0 systemd-logind[799]: Session 9 logged out. Waiting for processes to exit.
Dec 09 11:55:53 compute-0 systemd[1]: session-9.scope: Deactivated successfully.
Dec 09 11:55:53 compute-0 systemd[1]: session-9.scope: Consumed 2min 36.511s CPU time.
Dec 09 11:55:53 compute-0 systemd-logind[799]: Removed session 9.
Dec 09 11:55:58 compute-0 sshd-session[44934]: Accepted publickey for zuul from 192.168.122.30 port 36134 ssh2: ECDSA SHA256:9TQybH6jbBrVcztEaDmRsG3ssVtaycQ7UiUr3v9GScY
Dec 09 11:55:58 compute-0 systemd-logind[799]: New session 10 of user zuul.
Dec 09 11:55:58 compute-0 systemd[1]: Started Session 10 of User zuul.
Dec 09 11:55:58 compute-0 sshd-session[44934]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 11:55:59 compute-0 python3.9[45087]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 11:56:00 compute-0 sudo[45241]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msmtynootrkrnynexpypwdlgqioeljut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281360.0180452-68-170110117747203/AnsiballZ_getent.py'
Dec 09 11:56:00 compute-0 sudo[45241]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:56:00 compute-0 python3.9[45243]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Dec 09 11:56:00 compute-0 sudo[45241]: pam_unix(sudo:session): session closed for user root
Dec 09 11:56:01 compute-0 sudo[45394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kljgiqeyaviirfarxccskxldpvpdqeff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281360.8553407-92-34381182794613/AnsiballZ_group.py'
Dec 09 11:56:01 compute-0 sudo[45394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:56:01 compute-0 python3.9[45396]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 09 11:56:01 compute-0 groupadd[45397]: group added to /etc/group: name=openvswitch, GID=42476
Dec 09 11:56:01 compute-0 groupadd[45397]: group added to /etc/gshadow: name=openvswitch
Dec 09 11:56:01 compute-0 groupadd[45397]: new group: name=openvswitch, GID=42476
Dec 09 11:56:01 compute-0 sudo[45394]: pam_unix(sudo:session): session closed for user root
Dec 09 11:56:02 compute-0 sudo[45552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojaojzppiefgfwdkddggrypcajsxspjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281361.7672853-116-225585086607869/AnsiballZ_user.py'
Dec 09 11:56:02 compute-0 sudo[45552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:56:02 compute-0 python3.9[45554]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 09 11:56:02 compute-0 useradd[45556]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Dec 09 11:56:02 compute-0 useradd[45556]: add 'openvswitch' to group 'hugetlbfs'
Dec 09 11:56:02 compute-0 useradd[45556]: add 'openvswitch' to shadow group 'hugetlbfs'
Dec 09 11:56:02 compute-0 sudo[45552]: pam_unix(sudo:session): session closed for user root
Dec 09 11:56:03 compute-0 sudo[45712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxrflubcctwjivxbyqoiimoyypuplktk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281362.862339-146-204449434371746/AnsiballZ_setup.py'
Dec 09 11:56:03 compute-0 sudo[45712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:56:03 compute-0 python3.9[45714]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 09 11:56:03 compute-0 sudo[45712]: pam_unix(sudo:session): session closed for user root
Dec 09 11:56:04 compute-0 sudo[45796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjyfutoaxdpggutamwmbzrubcshbunwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281362.862339-146-204449434371746/AnsiballZ_dnf.py'
Dec 09 11:56:04 compute-0 sudo[45796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:56:04 compute-0 python3.9[45798]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 09 11:56:07 compute-0 sudo[45796]: pam_unix(sudo:session): session closed for user root
Dec 09 11:56:07 compute-0 sudo[45960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvhzwemzzirgmgwovgbvihyuqqqycovf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281367.453986-188-266546428226605/AnsiballZ_dnf.py'
Dec 09 11:56:07 compute-0 sudo[45960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:56:07 compute-0 python3.9[45962]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
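
Note the two-step dnf pattern: the first call (pid 45798) runs with download_only=True to pre-fetch the package, and the second (pid 45962) installs it with state=present, so the actual install window is short. A sketch of the pair, with parameters from the logged invocations and task names assumed:

  - name: Pre-fetch openvswitch  # task name assumed
    become: true
    ansible.builtin.dnf:
      name: openvswitch
      download_only: true

  - name: Install openvswitch  # task name assumed
    become: true
    ansible.builtin.dnf:
      name: openvswitch
      state: present

The SELinux "Converting 2731 SID table entries" messages that follow are most likely the policy reload triggered by the package's scriptlets (see the load_policy seqno=9 line below), not an error.
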
Dec 09 11:56:22 compute-0 kernel: SELinux:  Converting 2731 SID table entries...
Dec 09 11:56:22 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 09 11:56:22 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 09 11:56:22 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 09 11:56:22 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 09 11:56:22 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 09 11:56:22 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 09 11:56:22 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 09 11:56:22 compute-0 groupadd[45985]: group added to /etc/group: name=unbound, GID=993
Dec 09 11:56:22 compute-0 groupadd[45985]: group added to /etc/gshadow: name=unbound
Dec 09 11:56:22 compute-0 groupadd[45985]: new group: name=unbound, GID=993
Dec 09 11:56:22 compute-0 useradd[45992]: new user: name=unbound, UID=993, GID=993, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Dec 09 11:56:22 compute-0 dbus-broker-launch[778]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Dec 09 11:56:22 compute-0 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Dec 09 11:56:23 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 09 11:56:23 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 09 11:56:23 compute-0 systemd[1]: Reloading.
Dec 09 11:56:23 compute-0 systemd-rc-local-generator[46489]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 11:56:23 compute-0 systemd-sysv-generator[46494]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 11:56:24 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 09 11:56:24 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 09 11:56:24 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 09 11:56:24 compute-0 systemd[1]: run-rbf5ad82134f648c4ad11afd1018e4b8c.service: Deactivated successfully.
Dec 09 11:56:24 compute-0 sudo[45960]: pam_unix(sudo:session): session closed for user root
Dec 09 11:56:25 compute-0 sudo[47059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltmvyqtrdguizmlxfswseqtrpznpuook ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281384.7950733-212-170625029812095/AnsiballZ_systemd.py'
Dec 09 11:56:25 compute-0 sudo[47059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:56:25 compute-0 python3.9[47061]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 09 11:56:25 compute-0 systemd[1]: Reloading.
Dec 09 11:56:25 compute-0 systemd-sysv-generator[47090]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 11:56:25 compute-0 systemd-rc-local-generator[47087]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 11:56:26 compute-0 systemd[1]: Starting Open vSwitch Database Unit...
Dec 09 11:56:26 compute-0 chown[47104]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Dec 09 11:56:26 compute-0 ovs-ctl[47109]: /etc/openvswitch/conf.db does not exist ... (warning).
Dec 09 11:56:26 compute-0 ovs-ctl[47109]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Dec 09 11:56:26 compute-0 ovs-ctl[47109]: Starting ovsdb-server [  OK  ]
Dec 09 11:56:26 compute-0 ovs-vsctl[47158]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Dec 09 11:56:26 compute-0 ovs-vsctl[47178]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"8ca97f93-b2e6-431f-83fb-92735c787453\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Dec 09 11:56:26 compute-0 ovs-ctl[47109]: Configuring Open vSwitch system IDs [  OK  ]
Dec 09 11:56:26 compute-0 ovs-ctl[47109]: Enabling remote OVSDB managers [  OK  ]
Dec 09 11:56:26 compute-0 ovs-vsctl[47184]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Dec 09 11:56:26 compute-0 systemd[1]: Started Open vSwitch Database Unit.
Dec 09 11:56:26 compute-0 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Dec 09 11:56:26 compute-0 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Dec 09 11:56:26 compute-0 systemd[1]: Starting Open vSwitch Forwarding Unit...
Dec 09 11:56:26 compute-0 kernel: openvswitch: Open vSwitch switching datapath
Dec 09 11:56:26 compute-0 ovs-ctl[47229]: Inserting openvswitch module [  OK  ]
Dec 09 11:56:26 compute-0 ovs-ctl[47198]: Starting ovs-vswitchd [  OK  ]
Dec 09 11:56:26 compute-0 ovs-vsctl[47250]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Dec 09 11:56:26 compute-0 ovs-ctl[47198]: Enabling remote OVSDB managers [  OK  ]
Dec 09 11:56:26 compute-0 systemd[1]: Started Open vSwitch Forwarding Unit.
Dec 09 11:56:26 compute-0 systemd[1]: Starting Open vSwitch...
Dec 09 11:56:26 compute-0 systemd[1]: Finished Open vSwitch.
Dec 09 11:56:26 compute-0 sudo[47059]: pam_unix(sudo:session): session closed for user root
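
The service start above (pid 47061) is a plain systemd task; parameters come from the logged invocation, the task name is assumed. On first start, ovs-ctl creates /etc/openvswitch/conf.db, starts ovsdb-server, inserts the openvswitch kernel module, and starts ovs-vswitchd, exactly as the ovs-ctl lines record:

  - name: Enable and start Open vSwitch  # task name assumed
    become: true
    ansible.builtin.systemd:
      name: openvswitch.service
      enabled: true
      masked: false
      state: started

The CLI equivalent is roughly `systemctl enable --now openvswitch.service`.
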
Dec 09 11:56:27 compute-0 python3.9[47401]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 11:56:28 compute-0 sudo[47551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mujqdshdfclsjqvswkotgnqnfexckiny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281387.97775-266-213444735718206/AnsiballZ_sefcontext.py'
Dec 09 11:56:28 compute-0 sudo[47551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:56:28 compute-0 python3.9[47553]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Dec 09 11:56:30 compute-0 kernel: SELinux:  Converting 2745 SID table entries...
Dec 09 11:56:30 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 09 11:56:30 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 09 11:56:30 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 09 11:56:30 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 09 11:56:30 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 09 11:56:30 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 09 11:56:30 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 09 11:56:30 compute-0 sudo[47551]: pam_unix(sudo:session): session closed for user root
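
The sefcontext call above (pid 47553) registers a persistent file-context rule so anything under /var/lib/edpm-config is labeled container_file_t. Parameters are from the logged invocation; the task name is assumed:

  - name: Label edpm-config for container access  # task name assumed
    become: true
    community.general.sefcontext:
      target: '/var/lib/edpm-config(/.*)?'
      setype: container_file_t
      selevel: s0
      state: present
      reload: true

This is roughly what `semanage fcontext -a -t container_file_t '/var/lib/edpm-config(/.*)?'` does; the rule only takes effect on disk once something applies the label, here the ansible.builtin.file task below with setype=container_file_t.
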
Dec 09 11:56:31 compute-0 python3.9[47708]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 11:56:31 compute-0 sudo[47864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwampdbroofotbefaqqjmjbcdpjrjvvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281391.6906312-320-209350777814161/AnsiballZ_dnf.py'
Dec 09 11:56:31 compute-0 dbus-broker-launch[778]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Dec 09 11:56:31 compute-0 sudo[47864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:56:32 compute-0 python3.9[47866]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 09 11:56:33 compute-0 sudo[47864]: pam_unix(sudo:session): session closed for user root
Dec 09 11:56:34 compute-0 sudo[48017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shqqtfhpibjathnpvlldpdyvmpmsodbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281394.1127837-344-271277790665001/AnsiballZ_command.py'
Dec 09 11:56:34 compute-0 sudo[48017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:56:34 compute-0 python3.9[48019]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 11:56:35 compute-0 sudo[48017]: pam_unix(sudo:session): session closed for user root
Dec 09 11:56:36 compute-0 sudo[48304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxtfzoomtgcofbltatsqupkrkndbjdjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281395.6939385-368-108841295103111/AnsiballZ_file.py'
Dec 09 11:56:36 compute-0 sudo[48304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:56:36 compute-0 python3.9[48306]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 09 11:56:36 compute-0 sudo[48304]: pam_unix(sudo:session): session closed for user root
Dec 09 11:56:37 compute-0 python3.9[48456]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 11:56:37 compute-0 sudo[48608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhqvbhjxtnqvqrtqacvdefxookeszscy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281397.346174-416-171273487850404/AnsiballZ_dnf.py'
Dec 09 11:56:37 compute-0 sudo[48608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:56:37 compute-0 python3.9[48610]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 09 11:56:40 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 09 11:56:40 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 09 11:56:40 compute-0 systemd[1]: Reloading.
Dec 09 11:56:40 compute-0 systemd-rc-local-generator[48650]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 11:56:40 compute-0 systemd-sysv-generator[48654]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 11:56:40 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 09 11:56:40 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 09 11:56:40 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 09 11:56:40 compute-0 systemd[1]: run-r48fb8e782cf6498c924780d25cbe3484.service: Deactivated successfully.
Dec 09 11:56:41 compute-0 sudo[48608]: pam_unix(sudo:session): session closed for user root
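
NetworkManager-ovs only takes effect after a daemon restart, which is why the next task restarts the service; the restart below shows NMOvsFactory being loaded from libnm-device-plugin-ovs.so. A sketch of that restart task, with parameters from the logged invocation (pid 48928) and the task name assumed:

  - name: Restart NetworkManager to load the OVS plugin  # task name assumed
    become: true
    ansible.builtin.systemd:
      name: NetworkManager
      state: restarted
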
Dec 09 11:56:42 compute-0 sudo[48926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bilblbuhqjnudvzusssqcucuxshvczbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281401.7019806-440-73520681009905/AnsiballZ_systemd.py'
Dec 09 11:56:42 compute-0 sudo[48926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:56:42 compute-0 python3.9[48928]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 09 11:56:42 compute-0 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec 09 11:56:42 compute-0 systemd[1]: Stopped Network Manager Wait Online.
Dec 09 11:56:42 compute-0 systemd[1]: Stopping Network Manager Wait Online...
Dec 09 11:56:42 compute-0 systemd[1]: Stopping Network Manager...
Dec 09 11:56:42 compute-0 NetworkManager[7193]: <info>  [1765281402.3614] caught SIGTERM, shutting down normally.
Dec 09 11:56:42 compute-0 NetworkManager[7193]: <info>  [1765281402.3635] dhcp4 (eth0): canceled DHCP transaction
Dec 09 11:56:42 compute-0 NetworkManager[7193]: <info>  [1765281402.3635] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 09 11:56:42 compute-0 NetworkManager[7193]: <info>  [1765281402.3635] dhcp4 (eth0): state changed no lease
Dec 09 11:56:42 compute-0 NetworkManager[7193]: <info>  [1765281402.3638] manager: NetworkManager state is now CONNECTED_SITE
Dec 09 11:56:42 compute-0 NetworkManager[7193]: <info>  [1765281402.3700] exiting (success)
Dec 09 11:56:42 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 09 11:56:42 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 09 11:56:42 compute-0 systemd[1]: NetworkManager.service: Deactivated successfully.
Dec 09 11:56:42 compute-0 systemd[1]: Stopped Network Manager.
Dec 09 11:56:42 compute-0 systemd[1]: NetworkManager.service: Consumed 13.922s CPU time, 4.1M memory peak, read 0B from disk, written 28.5K to disk.
Dec 09 11:56:42 compute-0 systemd[1]: Starting Network Manager...
Dec 09 11:56:42 compute-0 NetworkManager[48938]: <info>  [1765281402.4320] NetworkManager (version 1.54.2-1.el9) is starting... (after a restart, boot:3b8ce532-7834-4232-b208-67ea0773ffd0)
Dec 09 11:56:42 compute-0 NetworkManager[48938]: <info>  [1765281402.4321] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec 09 11:56:42 compute-0 NetworkManager[48938]: <info>  [1765281402.4386] manager[0x556c00306000]: monitoring kernel firmware directory '/lib/firmware'.
Dec 09 11:56:42 compute-0 systemd[1]: Starting Hostname Service...
Dec 09 11:56:42 compute-0 systemd[1]: Started Hostname Service.
Dec 09 11:56:42 compute-0 NetworkManager[48938]: <info>  [1765281402.5238] hostname: hostname: using hostnamed
Dec 09 11:56:42 compute-0 NetworkManager[48938]: <info>  [1765281402.5240] hostname: static hostname changed from (none) to "compute-0"
Dec 09 11:56:42 compute-0 NetworkManager[48938]: <info>  [1765281402.5245] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec 09 11:56:42 compute-0 NetworkManager[48938]: <info>  [1765281402.5250] manager[0x556c00306000]: rfkill: Wi-Fi hardware radio set enabled
Dec 09 11:56:42 compute-0 NetworkManager[48938]: <info>  [1765281402.5251] manager[0x556c00306000]: rfkill: WWAN hardware radio set enabled
Dec 09 11:56:42 compute-0 NetworkManager[48938]: <info>  [1765281402.5275] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-device-plugin-ovs.so)
Dec 09 11:56:42 compute-0 NetworkManager[48938]: <info>  [1765281402.5285] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-device-plugin-team.so)
Dec 09 11:56:42 compute-0 NetworkManager[48938]: <info>  [1765281402.5286] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 09 11:56:42 compute-0 NetworkManager[48938]: <info>  [1765281402.5287] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 09 11:56:42 compute-0 NetworkManager[48938]: <info>  [1765281402.5287] manager: Networking is enabled by state file
Dec 09 11:56:42 compute-0 NetworkManager[48938]: <info>  [1765281402.5289] settings: Loaded settings plugin: keyfile (internal)
Dec 09 11:56:42 compute-0 NetworkManager[48938]: <info>  [1765281402.5293] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec 09 11:56:42 compute-0 NetworkManager[48938]: <info>  [1765281402.5320] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec 09 11:56:42 compute-0 NetworkManager[48938]: <info>  [1765281402.5330] dhcp: init: Using DHCP client 'internal'
Dec 09 11:56:42 compute-0 NetworkManager[48938]: <info>  [1765281402.5332] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec 09 11:56:42 compute-0 NetworkManager[48938]: <info>  [1765281402.5338] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 09 11:56:42 compute-0 NetworkManager[48938]: <info>  [1765281402.5343] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec 09 11:56:42 compute-0 NetworkManager[48938]: <info>  [1765281402.5353] device (lo): Activation: starting connection 'lo' (8ff964e8-13df-4b37-96bf-869f14ef83b9)
Dec 09 11:56:42 compute-0 NetworkManager[48938]: <info>  [1765281402.5360] device (eth0): carrier: link connected
Dec 09 11:56:42 compute-0 NetworkManager[48938]: <info>  [1765281402.5365] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec 09 11:56:42 compute-0 NetworkManager[48938]: <info>  [1765281402.5370] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec 09 11:56:42 compute-0 NetworkManager[48938]: <info>  [1765281402.5370] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 09 11:56:42 compute-0 NetworkManager[48938]: <info>  [1765281402.5376] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 09 11:56:42 compute-0 NetworkManager[48938]: <info>  [1765281402.5382] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 09 11:56:42 compute-0 NetworkManager[48938]: <info>  [1765281402.5389] device (eth1): carrier: link connected
Dec 09 11:56:42 compute-0 NetworkManager[48938]: <info>  [1765281402.5392] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec 09 11:56:42 compute-0 NetworkManager[48938]: <info>  [1765281402.5398] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (b7cdfc62-b3ac-5a41-99f8-23b040034403) (indicated)
Dec 09 11:56:42 compute-0 NetworkManager[48938]: <info>  [1765281402.5398] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 09 11:56:42 compute-0 NetworkManager[48938]: <info>  [1765281402.5402] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 09 11:56:42 compute-0 NetworkManager[48938]: <info>  [1765281402.5408] device (eth1): Activation: starting connection 'ci-private-network' (b7cdfc62-b3ac-5a41-99f8-23b040034403)
Dec 09 11:56:42 compute-0 NetworkManager[48938]: <info>  [1765281402.5414] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 09 11:56:42 compute-0 systemd[1]: Started Network Manager.
Dec 09 11:56:42 compute-0 NetworkManager[48938]: <info>  [1765281402.5423] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec 09 11:56:42 compute-0 NetworkManager[48938]: <info>  [1765281402.5425] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec 09 11:56:42 compute-0 NetworkManager[48938]: <info>  [1765281402.5427] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec 09 11:56:42 compute-0 NetworkManager[48938]: <info>  [1765281402.5429] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 09 11:56:42 compute-0 NetworkManager[48938]: <info>  [1765281402.5433] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 09 11:56:42 compute-0 NetworkManager[48938]: <info>  [1765281402.5435] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 09 11:56:42 compute-0 NetworkManager[48938]: <info>  [1765281402.5438] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 09 11:56:42 compute-0 NetworkManager[48938]: <info>  [1765281402.5442] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec 09 11:56:42 compute-0 NetworkManager[48938]: <info>  [1765281402.5448] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 09 11:56:42 compute-0 NetworkManager[48938]: <info>  [1765281402.5452] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 09 11:56:42 compute-0 NetworkManager[48938]: <info>  [1765281402.5469] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 09 11:56:42 compute-0 NetworkManager[48938]: <info>  [1765281402.5479] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 09 11:56:42 compute-0 NetworkManager[48938]: <info>  [1765281402.5487] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec 09 11:56:42 compute-0 NetworkManager[48938]: <info>  [1765281402.5489] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec 09 11:56:42 compute-0 NetworkManager[48938]: <info>  [1765281402.5493] device (lo): Activation: successful, device activated.
Dec 09 11:56:42 compute-0 NetworkManager[48938]: <info>  [1765281402.5500] dhcp4 (eth0): state changed new lease, address=38.102.83.98
Dec 09 11:56:42 compute-0 systemd[1]: Starting Network Manager Wait Online...
Dec 09 11:56:42 compute-0 NetworkManager[48938]: <info>  [1765281402.5507] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec 09 11:56:42 compute-0 NetworkManager[48938]: <info>  [1765281402.5576] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 09 11:56:42 compute-0 NetworkManager[48938]: <info>  [1765281402.5583] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 09 11:56:42 compute-0 NetworkManager[48938]: <info>  [1765281402.5585] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 09 11:56:42 compute-0 NetworkManager[48938]: <info>  [1765281402.5590] manager: NetworkManager state is now CONNECTED_LOCAL
Dec 09 11:56:42 compute-0 NetworkManager[48938]: <info>  [1765281402.5592] device (eth1): Activation: successful, device activated.
Dec 09 11:56:42 compute-0 NetworkManager[48938]: <info>  [1765281402.5616] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 09 11:56:42 compute-0 NetworkManager[48938]: <info>  [1765281402.5617] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 09 11:56:42 compute-0 NetworkManager[48938]: <info>  [1765281402.5620] manager: NetworkManager state is now CONNECTED_SITE
Dec 09 11:56:42 compute-0 NetworkManager[48938]: <info>  [1765281402.5622] device (eth0): Activation: successful, device activated.
Dec 09 11:56:42 compute-0 NetworkManager[48938]: <info>  [1765281402.5626] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec 09 11:56:42 compute-0 NetworkManager[48938]: <info>  [1765281402.5628] manager: startup complete
Dec 09 11:56:42 compute-0 systemd[1]: Finished Network Manager Wait Online.
Dec 09 11:56:42 compute-0 sudo[48926]: pam_unix(sudo:session): session closed for user root
Dec 09 11:56:43 compute-0 sudo[49152]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icyppkifrmkdbeinjcuawdfaehdxqfcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281402.8190475-464-246471466837762/AnsiballZ_dnf.py'
Dec 09 11:56:43 compute-0 sudo[49152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:56:43 compute-0 python3.9[49154]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 09 11:56:49 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 09 11:56:49 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 09 11:56:49 compute-0 systemd[1]: Reloading.
Dec 09 11:56:49 compute-0 systemd-sysv-generator[49205]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 11:56:49 compute-0 systemd-rc-local-generator[49201]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 11:56:49 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 09 11:56:50 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 09 11:56:50 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 09 11:56:50 compute-0 systemd[1]: run-r77207ccccceb43a9bbeadf83f39b70ba.service: Deactivated successfully.
Dec 09 11:56:50 compute-0 sudo[49152]: pam_unix(sudo:session): session closed for user root
Dec 09 11:56:51 compute-0 sudo[49610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjkodzcizydafffplgeqeefyrgzmrgac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281410.8386724-500-94423339549827/AnsiballZ_stat.py'
Dec 09 11:56:51 compute-0 sudo[49610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:56:51 compute-0 python3.9[49612]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 11:56:51 compute-0 sudo[49610]: pam_unix(sudo:session): session closed for user root
Dec 09 11:56:52 compute-0 sudo[49762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zodqanhbvemhjcrjieqvtoknoejkpipd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281411.6009386-527-150722327981521/AnsiballZ_ini_file.py'
Dec 09 11:56:52 compute-0 sudo[49762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:56:52 compute-0 python3.9[49764]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:56:52 compute-0 sudo[49762]: pam_unix(sudo:session): session closed for user root
Dec 09 11:56:52 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 09 11:56:52 compute-0 sudo[49916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qayodifbbhbizsyywomgdfnpvnhsgfzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281412.5648847-557-161043037691751/AnsiballZ_ini_file.py'
Dec 09 11:56:52 compute-0 sudo[49916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:56:53 compute-0 python3.9[49918]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:56:53 compute-0 sudo[49916]: pam_unix(sudo:session): session closed for user root
Dec 09 11:56:53 compute-0 sudo[50068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njicnnaiiwdjzhjtfktnyhckyktrvbtf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281413.2121406-557-166987013910562/AnsiballZ_ini_file.py'
Dec 09 11:56:53 compute-0 sudo[50068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:56:53 compute-0 python3.9[50070]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:56:53 compute-0 sudo[50068]: pam_unix(sudo:session): session closed for user root
Dec 09 11:56:54 compute-0 sudo[50220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muatppftjaditnqmhavmlkbairzzzjyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281413.9722471-602-66229957841152/AnsiballZ_ini_file.py'
Dec 09 11:56:54 compute-0 sudo[50220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:56:54 compute-0 python3.9[50222]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:56:54 compute-0 sudo[50220]: pam_unix(sudo:session): session closed for user root
Dec 09 11:56:54 compute-0 sudo[50372]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvtdiivkhxkgmmbqmoybjrqtlkmtiniz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281414.6530755-602-66421053042349/AnsiballZ_ini_file.py'
Dec 09 11:56:54 compute-0 sudo[50372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:56:55 compute-0 python3.9[50374]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:56:55 compute-0 sudo[50372]: pam_unix(sudo:session): session closed for user root
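
The five ini_file calls above (pids 49764, 49918, 50070, 50222, 50374) adjust NetworkManager's [main] section: they set no-auto-default=* in NetworkManager.conf and remove any dns=none and rc-manager=unmanaged entries from both NetworkManager.conf and /etc/NetworkManager/conf.d/99-cloud-init.conf, leaving NetworkManager in charge of resolv.conf (dns=default, rc-manager=symlink, per the dns-mgr init line above). A sketch of the first of them, with parameters from the logged invocation and the task name assumed:

  - name: Never auto-create default connections  # task name assumed
    become: true
    community.general.ini_file:
      path: /etc/NetworkManager/NetworkManager.conf
      section: main
      option: no-auto-default
      value: '*'
      no_extra_spaces: true
      backup: true
      mode: '0644'
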
Dec 09 11:56:55 compute-0 sudo[50524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czspamcyfwlwhxzibepzoeggrazvzrrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281415.2693958-647-257460898343688/AnsiballZ_stat.py'
Dec 09 11:56:55 compute-0 sudo[50524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:56:55 compute-0 python3.9[50526]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 11:56:55 compute-0 sudo[50524]: pam_unix(sudo:session): session closed for user root
Dec 09 11:56:56 compute-0 sudo[50647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycejlqeaklnikptflohjjwpcpdtltfrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281415.2693958-647-257460898343688/AnsiballZ_copy.py'
Dec 09 11:56:56 compute-0 sudo[50647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:56:56 compute-0 python3.9[50649]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1765281415.2693958-647-257460898343688/.source _original_basename=.o2by0su0 follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:56:56 compute-0 sudo[50647]: pam_unix(sudo:session): session closed for user root
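
The stat-then-copy pair above is Ansible's standard file-transfer handshake: ansible.legacy.stat checksums the destination, and since /etc/dhcp/dhclient-enter-hooks differs or does not exist, ansible.legacy.copy ships the staged source into place. A sketch of the originating task, with dest and mode from the logged invocation; the source file name is an assumption, and the original may equally be a template task:

  - name: Install dhclient enter hooks  # task name assumed
    become: true
    ansible.builtin.copy:
      src: dhclient-enter-hooks  # local source file name assumed
      dest: /etc/dhcp/dhclient-enter-hooks
      mode: '0755'
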
Dec 09 11:56:56 compute-0 sudo[50799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkiypocvpzzrlilpxtxawifxkgequkub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281416.7283397-692-2690608459988/AnsiballZ_file.py'
Dec 09 11:56:56 compute-0 sudo[50799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:56:57 compute-0 python3.9[50801]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:56:57 compute-0 sudo[50799]: pam_unix(sudo:session): session closed for user root
Dec 09 11:56:57 compute-0 sudo[50951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qejgajcqdqajrdhfjbjafedwwfygaeif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281417.380797-716-5619221296571/AnsiballZ_edpm_os_net_config_mappings.py'
Dec 09 11:56:57 compute-0 sudo[50951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:56:57 compute-0 python3.9[50953]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Dec 09 11:56:57 compute-0 sudo[50951]: pam_unix(sudo:session): session closed for user root
Dec 09 11:56:58 compute-0 sudo[51103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axnumfpratqljikxvfwxihlehfbljyti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281418.2906532-743-31376898677977/AnsiballZ_file.py'
Dec 09 11:56:58 compute-0 sudo[51103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:56:58 compute-0 python3.9[51105]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:56:58 compute-0 sudo[51103]: pam_unix(sudo:session): session closed for user root
Dec 09 11:56:59 compute-0 sudo[51255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtysfoibyqaetmznfvwjfzdacxbnjfgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281419.4701853-773-29918008822692/AnsiballZ_stat.py'
Dec 09 11:56:59 compute-0 sudo[51255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:57:00 compute-0 sudo[51255]: pam_unix(sudo:session): session closed for user root
Dec 09 11:57:00 compute-0 sudo[51378]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzaircxmbkpuipstdmndwidljtfqoooo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281419.4701853-773-29918008822692/AnsiballZ_copy.py'
Dec 09 11:57:00 compute-0 sudo[51378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:57:00 compute-0 sudo[51378]: pam_unix(sudo:session): session closed for user root
Dec 09 11:57:01 compute-0 sudo[51530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pawhmpllppkksvmyxbplvecxrknvdlpq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281420.8148103-818-207409143331332/AnsiballZ_slurp.py'
Dec 09 11:57:01 compute-0 sudo[51530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:57:01 compute-0 python3.9[51532]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Dec 09 11:57:01 compute-0 sudo[51530]: pam_unix(sudo:session): session closed for user root
Dec 09 11:57:02 compute-0 sudo[51705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqgshpagprxhcomfvhuzmndghslzloqx ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281421.7306533-845-71063742014589/async_wrapper.py j180054939483 300 /home/zuul/.ansible/tmp/ansible-tmp-1765281421.7306533-845-71063742014589/AnsiballZ_edpm_os_net_config.py _'
Dec 09 11:57:02 compute-0 sudo[51705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:57:02 compute-0 ansible-async_wrapper.py[51707]: Invoked with j180054939483 300 /home/zuul/.ansible/tmp/ansible-tmp-1765281421.7306533-845-71063742014589/AnsiballZ_edpm_os_net_config.py _
Dec 09 11:57:02 compute-0 ansible-async_wrapper.py[51710]: Starting module and watcher
Dec 09 11:57:02 compute-0 ansible-async_wrapper.py[51710]: Start watching 51711 (300)
Dec 09 11:57:02 compute-0 ansible-async_wrapper.py[51711]: Start module (51711)
Dec 09 11:57:02 compute-0 ansible-async_wrapper.py[51707]: Return async_wrapper task started.
Dec 09 11:57:02 compute-0 sudo[51705]: pam_unix(sudo:session): session closed for user root
Dec 09 11:57:02 compute-0 python3.9[51712]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
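
os-net-config runs through Ansible's async wrapper (pids 51707-51711) with a 300-second watchdog so a network cutover cannot strand the SSH connection mid-task. The module parameters come from the logged invocation (pid 51712); the collection prefix, task name, and poll interval are assumptions:

  - name: Apply os-net-config  # task name assumed; module likely ships in the osp.edpm collection
    become: true
    edpm_os_net_config:
      config_file: /etc/os-net-config/config.yaml
      cleanup: true
      debug: true
      detailed_exit_codes: true
      safe_defaults: false
      use_nmstate: true
    async: 300  # matches the watcher timeout logged above
    poll: 5     # poll interval not in the log; assumption

With use_nmstate=True the module drives NetworkManager rather than writing ifcfg files, which is why the checkpoint and connection-add audit lines below come from NetworkManager itself.
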
Dec 09 11:57:03 compute-0 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Dec 09 11:57:03 compute-0 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Dec 09 11:57:03 compute-0 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Dec 09 11:57:03 compute-0 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Dec 09 11:57:03 compute-0 kernel: cfg80211: failed to load regulatory.db
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.6921] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51713 uid=0 result="success"
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.6937] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51713 uid=0 result="success"
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.7461] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.7462] audit: op="connection-add" uuid="7edaec59-6fe7-4252-bbba-d1a5c0d0ad3d" name="br-ex-br" pid=51713 uid=0 result="success"
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.7478] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.7479] audit: op="connection-add" uuid="d010ce4a-21e4-4d8a-81d3-38ff51512fd6" name="br-ex-port" pid=51713 uid=0 result="success"
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.7491] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.7492] audit: op="connection-add" uuid="b418d7c6-e5b7-4503-9316-f2bd6857e918" name="eth1-port" pid=51713 uid=0 result="success"
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.7503] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.7505] audit: op="connection-add" uuid="5fb0de4c-86b1-47c8-b4e8-5172814720a6" name="vlan20-port" pid=51713 uid=0 result="success"
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.7515] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.7517] audit: op="connection-add" uuid="357f1b85-9a54-4005-9a63-25a42c0f8219" name="vlan21-port" pid=51713 uid=0 result="success"
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.7527] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.7528] audit: op="connection-add" uuid="cc548417-5827-4671-991d-645c86b82754" name="vlan22-port" pid=51713 uid=0 result="success"
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.7539] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.7541] audit: op="connection-add" uuid="02615510-6d2f-43a1-805d-ad051f7c60e7" name="vlan23-port" pid=51713 uid=0 result="success"
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.7563] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="802-3-ethernet.mtu,ipv4.dhcp-client-id,ipv4.dhcp-timeout,ipv6.addr-gen-mode,ipv6.dhcp-timeout,ipv6.method,connection.autoconnect-priority,connection.timestamp" pid=51713 uid=0 result="success"
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.7579] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.7581] audit: op="connection-add" uuid="ffe54d53-4e74-4f82-b3f6-2acb4d847d1a" name="br-ex-if" pid=51713 uid=0 result="success"
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.7923] audit: op="connection-update" uuid="b7cdfc62-b3ac-5a41-99f8-23b040034403" name="ci-private-network" args="ipv4.never-default,ipv4.routes,ipv4.routing-rules,ipv4.addresses,ipv4.dns,ipv4.method,ipv6.addr-gen-mode,ipv6.routes,ipv6.routing-rules,ipv6.addresses,ipv6.dns,ipv6.method,ovs-external-ids.data,connection.controller,connection.master,connection.timestamp,connection.slave-type,connection.port-type,ovs-interface.type" pid=51713 uid=0 result="success"
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.7942] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.7944] audit: op="connection-add" uuid="20eeb2bc-2b31-40b2-b48b-264df3547176" name="vlan20-if" pid=51713 uid=0 result="success"
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.7961] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.7964] audit: op="connection-add" uuid="551729b4-58d0-4df8-af7a-f911ffd370c1" name="vlan21-if" pid=51713 uid=0 result="success"
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.7980] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.7982] audit: op="connection-add" uuid="4c04b173-302e-4728-859f-ae3a2502b742" name="vlan22-if" pid=51713 uid=0 result="success"
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8000] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8002] audit: op="connection-add" uuid="53fada38-875f-4a0f-8ec1-b2c946cf70dd" name="vlan23-if" pid=51713 uid=0 result="success"
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8015] audit: op="connection-delete" uuid="17c7b7a5-04f6-3c3d-903e-30cdf5d51276" name="Wired connection 1" pid=51713 uid=0 result="success"
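
The connection-add batch above (the br-ex bridge, its ports for eth1 and vlan20-vlan23, and the matching OVS interfaces) is consistent with an os-net-config layout along the following lines. Only the device names and the bridge/port/VLAN shape are taken from the log; the dhcp choice and every omitted field (addresses, routes, MTU) are assumptions:

  network_config:
    - type: ovs_bridge
      name: br-ex
      use_dhcp: false  # assumption
      members:
        - type: interface
          name: eth1
        - type: vlan
          vlan_id: 20
        - type: vlan
          vlan_id: 21
        - type: vlan
          vlan_id: 22
        - type: vlan
          vlan_id: 23

NetworkManager activates this tree under the rollback checkpoint created earlier, after deleting the now-superseded 'Wired connection 1' profile.
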
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8030] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <warn>  [1765281424.8040] device (br-ex)[Open vSwitch Bridge]: error setting IPv4 forwarding to '1': Success
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8049] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8053] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (7edaec59-6fe7-4252-bbba-d1a5c0d0ad3d)
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8054] audit: op="connection-activate" uuid="7edaec59-6fe7-4252-bbba-d1a5c0d0ad3d" name="br-ex-br" pid=51713 uid=0 result="success"
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8057] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <warn>  [1765281424.8058] device (br-ex)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8064] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8068] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (d010ce4a-21e4-4d8a-81d3-38ff51512fd6)
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8070] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <warn>  [1765281424.8071] device (eth1)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8075] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8079] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (b418d7c6-e5b7-4503-9316-f2bd6857e918)
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8081] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <warn>  [1765281424.8082] device (vlan20)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8087] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8091] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (5fb0de4c-86b1-47c8-b4e8-5172814720a6)
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8092] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <warn>  [1765281424.8093] device (vlan21)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8098] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8102] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (357f1b85-9a54-4005-9a63-25a42c0f8219)
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8103] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <warn>  [1765281424.8104] device (vlan22)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8110] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8113] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (cc548417-5827-4671-991d-645c86b82754)
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8114] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <warn>  [1765281424.8115] device (vlan23)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8119] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8124] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (02615510-6d2f-43a1-805d-ad051f7c60e7)
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8124] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8126] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8127] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8133] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <warn>  [1765281424.8134] device (br-ex)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8136] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8138] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (ffe54d53-4e74-4f82-b3f6-2acb4d847d1a)
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8139] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8141] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8142] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8143] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8144] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8151] device (eth1): disconnecting for new activation request.
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8151] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8153] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8154] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8155] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8157] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <warn>  [1765281424.8158] device (vlan20)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8159] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8162] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (20eeb2bc-2b31-40b2-b48b-264df3547176)
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8163] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8165] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8166] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8166] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8168] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <warn>  [1765281424.8169] device (vlan21)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8171] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8173] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (551729b4-58d0-4df8-af7a-f911ffd370c1)
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8174] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8175] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8176] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8177] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8179] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <warn>  [1765281424.8179] device (vlan22)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8181] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8184] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (4c04b173-302e-4728-859f-ae3a2502b742)
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8184] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8186] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8187] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8188] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8190] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <warn>  [1765281424.8190] device (vlan23)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8192] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8195] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (53fada38-875f-4a0f-8ec1-b2c946cf70dd)
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8195] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8197] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8198] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8199] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8200] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8210] audit: op="device-reapply" interface="eth0" ifindex=2 args="802-3-ethernet.mtu,ipv4.dhcp-client-id,ipv4.dhcp-timeout,ipv6.addr-gen-mode,ipv6.method,connection.autoconnect-priority" pid=51713 uid=0 result="success"
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8211] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8213] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8215] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8220] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8223] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8225] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8227] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8228] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8231] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8247] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8253] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8256] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8263] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 kernel: ovs-system: entered promiscuous mode
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8268] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8273] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8275] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8280] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8284] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 systemd-udevd[51718]: Network interface NamePolicy= disabled on kernel command line.
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8289] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8293] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8298] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8305] dhcp4 (eth0): canceled DHCP transaction
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8305] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8305] dhcp4 (eth0): state changed no lease
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8308] dhcp4 (eth0): activation: beginning transaction (no timeout)
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8323] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8327] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51713 uid=0 result="fail" reason="Device is not activated"
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8333] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8342] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Dec 09 11:57:04 compute-0 kernel: Timeout policy base is empty
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8351] device (eth1): disconnecting for new activation request.
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8352] audit: op="connection-activate" uuid="b7cdfc62-b3ac-5a41-99f8-23b040034403" name="ci-private-network" pid=51713 uid=0 result="success"
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8363] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8367] dhcp4 (eth0): state changed new lease, address=38.102.83.98
Dec 09 11:57:04 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 09 11:57:04 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8735] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8749] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51713 uid=0 result="success"
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8750] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8976] device (eth1): Activation: starting connection 'ci-private-network' (b7cdfc62-b3ac-5a41-99f8-23b040034403)
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8982] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8984] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8985] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8987] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8988] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8989] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8990] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.8997] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.9001] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.9008] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.9015] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.9021] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.9027] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.9031] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.9035] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.9039] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.9043] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Dec 09 11:57:04 compute-0 kernel: br-ex: entered promiscuous mode
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.9047] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.9052] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.9056] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.9060] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.9064] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.9069] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.9088] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 kernel: vlan22: entered promiscuous mode
Dec 09 11:57:04 compute-0 systemd-udevd[51719]: Network interface NamePolicy= disabled on kernel command line.
Dec 09 11:57:04 compute-0 kernel: vlan21: entered promiscuous mode
Dec 09 11:57:04 compute-0 kernel: vlan23: entered promiscuous mode
Dec 09 11:57:04 compute-0 kernel: vlan20: entered promiscuous mode
Dec 09 11:57:04 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.9497] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.9511] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.9518] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.9525] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.9532] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.9539] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.9554] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.9595] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.9601] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.9607] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.9615] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.9623] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.9636] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.9644] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.9654] device (eth1): Activation: successful, device activated.
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.9660] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.9663] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.9665] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.9667] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.9669] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.9673] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.9681] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.9689] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.9694] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.9699] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.9704] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.9709] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.9714] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.9719] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 09 11:57:04 compute-0 NetworkManager[48938]: <info>  [1765281424.9725] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
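The burst above is the network re-layout converging: eth1 and the vlan20-vlan23 interfaces are attached as ports of the br-ex Open vSwitch bridge, and each device walks NetworkManager's activation state machine (disconnected -> prepare -> config -> ip-config -> ip-check -> secondaries -> activated). A minimal verification sketch, assuming nmcli and ovs-vsctl are present on the node; the device names are the ones in the log:

    #!/usr/bin/python3.9
    """Verify the br-ex/vlan topology the activations above converge to."""
    import subprocess

    def run(*argv):
        return subprocess.run(argv, check=True, capture_output=True,
                              text=True).stdout

    # NetworkManager's view: each device should report "connected"
    # once it reaches the "activated" state logged above.
    for line in run("nmcli", "-t", "-f", "DEVICE,TYPE,STATE", "device").splitlines():
        device, dev_type, state = line.split(":")[:3]
        if device in ("br-ex", "eth1", "vlan20", "vlan21", "vlan22", "vlan23"):
            print(f"{device:8s} {dev_type:25s} {state}")

    # Open vSwitch's view: eth1 and the vlan interfaces should now be
    # ports on br-ex.
    print(run("ovs-vsctl", "list-ports", "br-ex"))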
Dec 09 11:57:06 compute-0 NetworkManager[48938]: <info>  [1765281426.1475] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51713 uid=0 result="success"
Dec 09 11:57:06 compute-0 sudo[52068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hybeywbkridqhbmhvyspmhkygduumrsi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281425.7637863-845-225152388312004/AnsiballZ_async_status.py'
Dec 09 11:57:06 compute-0 sudo[52068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:57:06 compute-0 NetworkManager[48938]: <info>  [1765281426.2919] checkpoint[0x556c002da950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Dec 09 11:57:06 compute-0 NetworkManager[48938]: <info>  [1765281426.2923] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51713 uid=0 result="success"
Dec 09 11:57:06 compute-0 python3.9[52070]: ansible-ansible.legacy.async_status Invoked with jid=j180054939483.51707 mode=status _async_dir=/root/.ansible_async
Dec 09 11:57:06 compute-0 sudo[52068]: pam_unix(sudo:session): session closed for user root
Dec 09 11:57:06 compute-0 NetworkManager[48938]: <info>  [1765281426.5881] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51713 uid=0 result="success"
Dec 09 11:57:06 compute-0 NetworkManager[48938]: <info>  [1765281426.5892] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51713 uid=0 result="success"
Dec 09 11:57:06 compute-0 NetworkManager[48938]: <info>  [1765281426.8009] audit: op="networking-control" arg="global-dns-configuration" pid=51713 uid=0 result="success"
Dec 09 11:57:06 compute-0 NetworkManager[48938]: <info>  [1765281426.8038] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Dec 09 11:57:06 compute-0 NetworkManager[48938]: <info>  [1765281426.8065] audit: op="networking-control" arg="global-dns-configuration" pid=51713 uid=0 result="success"
Dec 09 11:57:06 compute-0 NetworkManager[48938]: <info>  [1765281426.8083] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51713 uid=0 result="success"
Dec 09 11:57:06 compute-0 NetworkManager[48938]: <info>  [1765281426.9526] checkpoint[0x556c002daa20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Dec 09 11:57:06 compute-0 NetworkManager[48938]: <info>  [1765281426.9530] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51713 uid=0 result="success"
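The checkpoint audit entries above are NetworkManager's standard checkpoint lifecycle: create a rollback point before touching the network, keep extending its rollback timeout while work is in flight, then destroy it to commit. A minimal sketch of the same D-Bus calls (CheckpointCreate / CheckpointAdjustRollbackTimeout / CheckpointDestroy) driven through busctl; the 120-second timeout is illustrative, not taken from the log:

    #!/usr/bin/python3.9
    """Replay the checkpoint create/extend/destroy cycle logged above."""
    import subprocess

    NM = "org.freedesktop.NetworkManager"

    def busctl(*args):
        # Needs root, matching the uid=0 audit entries above.
        out = subprocess.run(["busctl", "call", NM,
                              "/org/freedesktop/NetworkManager", NM, *args],
                             check=True, capture_output=True, text=True)
        return out.stdout.strip()

    # CheckpointCreate(devices: ao, rollback_timeout: u, flags: u) -> o
    # An empty device array snapshots all managed devices; if nothing
    # confirms the change, NM rolls back when the timeout expires.
    reply = busctl("CheckpointCreate", "aouu", "0", "120", "0")
    checkpoint = reply.split()[-1].strip('"')

    # Keep the rollback window open while work is still in flight ...
    busctl("CheckpointAdjustRollbackTimeout", "ou", checkpoint, "120")

    # ... and destroy the checkpoint to commit, as "checkpoint-destroy"
    # does above.
    busctl("CheckpointDestroy", "o", checkpoint)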
Dec 09 11:57:07 compute-0 ansible-async_wrapper.py[51711]: Module complete (51711)
Dec 09 11:57:07 compute-0 ansible-async_wrapper.py[51710]: Done in kid B.
Dec 09 11:57:09 compute-0 sudo[52174]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssoynjukiwabvmbdpaueagevauhdvkhu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281425.7637863-845-225152388312004/AnsiballZ_async_status.py'
Dec 09 11:57:09 compute-0 sudo[52174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:57:09 compute-0 python3.9[52176]: ansible-ansible.legacy.async_status Invoked with jid=j180054939483.51707 mode=status _async_dir=/root/.ansible_async
Dec 09 11:57:09 compute-0 sudo[52174]: pam_unix(sudo:session): session closed for user root
Dec 09 11:57:10 compute-0 sudo[52274]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nknijdjbsmumuumidvhukuieiavolkby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281425.7637863-845-225152388312004/AnsiballZ_async_status.py'
Dec 09 11:57:10 compute-0 sudo[52274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:57:10 compute-0 python3.9[52276]: ansible-ansible.legacy.async_status Invoked with jid=j180054939483.51707 mode=cleanup _async_dir=/root/.ansible_async
Dec 09 11:57:10 compute-0 sudo[52274]: pam_unix(sudo:session): session closed for user root
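The three async_status invocations above poll and then clean up the detached job (jid j180054939483.51707) that async_wrapper reported complete. A minimal sketch of that polling contract, assuming the conventional job-file layout where the wrapper writes its result JSON under _async_dir and marks completion with "finished": 1:

    #!/usr/bin/python3.9
    """Poll an Ansible async job the way the async_status calls above do."""
    import json, pathlib, time

    job = pathlib.Path("/root/.ansible_async") / "j180054939483.51707"

    # mode=status: read the job file until the wrapped module reports done.
    while True:
        data = json.loads(job.read_text())
        if data.get("finished"):
            break
        time.sleep(3)          # the log shows ~3 s between status probes

    # mode=cleanup: drop the job file once the result has been consumed.
    job.unlink()
    print(data.get("rc"), data.get("changed"))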
Dec 09 11:57:10 compute-0 sudo[52426]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fajzelblkeeqnnfbktbpnlmtcztmjbdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281430.6572382-926-50327608487255/AnsiballZ_stat.py'
Dec 09 11:57:10 compute-0 sudo[52426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:57:11 compute-0 python3.9[52428]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 11:57:11 compute-0 sudo[52426]: pam_unix(sudo:session): session closed for user root
Dec 09 11:57:11 compute-0 sudo[52549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvaqafoeeekutwsifpagkhmwxdhnybub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281430.6572382-926-50327608487255/AnsiballZ_copy.py'
Dec 09 11:57:11 compute-0 sudo[52549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:57:11 compute-0 python3.9[52551]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765281430.6572382-926-50327608487255/.source.returncode _original_basename=.ywpp3h_e follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:57:11 compute-0 sudo[52549]: pam_unix(sudo:session): session closed for user root
Dec 09 11:57:12 compute-0 sudo[52701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqcieqhjsedhvjcjkksuvlfojzydqhdd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281431.9626-974-241618613337624/AnsiballZ_stat.py'
Dec 09 11:57:12 compute-0 sudo[52701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:57:12 compute-0 python3.9[52703]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 11:57:12 compute-0 sudo[52701]: pam_unix(sudo:session): session closed for user root
Dec 09 11:57:12 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 09 11:57:12 compute-0 sudo[52826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjueksewydernixuxdzhjqjgukhbpjup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281431.9626-974-241618613337624/AnsiballZ_copy.py'
Dec 09 11:57:12 compute-0 sudo[52826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:57:13 compute-0 python3.9[52828]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765281431.9626-974-241618613337624/.source.cfg _original_basename=._bsg3zrt follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:57:13 compute-0 sudo[52826]: pam_unix(sudo:session): session closed for user root
Dec 09 11:57:13 compute-0 sudo[52979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvirciztzxqfrzgaeapmeegtnuetojge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281433.281014-1019-129900238022108/AnsiballZ_systemd.py'
Dec 09 11:57:13 compute-0 sudo[52979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:57:13 compute-0 python3.9[52981]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 09 11:57:13 compute-0 systemd[1]: Reloading Network Manager...
Dec 09 11:57:13 compute-0 NetworkManager[48938]: <info>  [1765281433.9602] audit: op="reload" arg="0" pid=52985 uid=0 result="success"
Dec 09 11:57:13 compute-0 NetworkManager[48938]: <info>  [1765281433.9613] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Dec 09 11:57:13 compute-0 systemd[1]: Reloaded Network Manager.
Dec 09 11:57:14 compute-0 sudo[52979]: pam_unix(sudo:session): session closed for user root
Dec 09 11:57:14 compute-0 sshd-session[44937]: Connection closed by 192.168.122.30 port 36134
Dec 09 11:57:14 compute-0 sshd-session[44934]: pam_unix(sshd:session): session closed for user zuul
Dec 09 11:57:14 compute-0 systemd[1]: session-10.scope: Deactivated successfully.
Dec 09 11:57:14 compute-0 systemd[1]: session-10.scope: Consumed 56.220s CPU time.
Dec 09 11:57:14 compute-0 systemd-logind[799]: Session 10 logged out. Waiting for processes to exit.
Dec 09 11:57:14 compute-0 systemd-logind[799]: Removed session 10.
Dec 09 11:57:16 compute-0 sshd-session[53014]: Connection closed by authenticating user root 165.232.73.250 port 55128 [preauth]
Dec 09 11:57:19 compute-0 sshd-session[53018]: Accepted publickey for zuul from 192.168.122.30 port 53374 ssh2: ECDSA SHA256:9TQybH6jbBrVcztEaDmRsG3ssVtaycQ7UiUr3v9GScY
Dec 09 11:57:19 compute-0 systemd-logind[799]: New session 11 of user zuul.
Dec 09 11:57:19 compute-0 systemd[1]: Started Session 11 of User zuul.
Dec 09 11:57:20 compute-0 sshd-session[53018]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 11:57:21 compute-0 python3.9[53171]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 11:57:22 compute-0 python3.9[53325]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 09 11:57:23 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 09 11:57:24 compute-0 python3.9[53520]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 11:57:24 compute-0 sshd-session[53021]: Connection closed by 192.168.122.30 port 53374
Dec 09 11:57:24 compute-0 sshd-session[53018]: pam_unix(sshd:session): session closed for user zuul
Dec 09 11:57:24 compute-0 systemd[1]: session-11.scope: Deactivated successfully.
Dec 09 11:57:24 compute-0 systemd[1]: session-11.scope: Consumed 2.470s CPU time.
Dec 09 11:57:24 compute-0 systemd-logind[799]: Session 11 logged out. Waiting for processes to exit.
Dec 09 11:57:24 compute-0 systemd-logind[799]: Removed session 11.
Dec 09 11:57:29 compute-0 sshd-session[53548]: Accepted publickey for zuul from 192.168.122.30 port 33222 ssh2: ECDSA SHA256:9TQybH6jbBrVcztEaDmRsG3ssVtaycQ7UiUr3v9GScY
Dec 09 11:57:29 compute-0 systemd-logind[799]: New session 12 of user zuul.
Dec 09 11:57:29 compute-0 systemd[1]: Started Session 12 of User zuul.
Dec 09 11:57:29 compute-0 sshd-session[53548]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 11:57:30 compute-0 python3.9[53701]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 11:57:32 compute-0 python3.9[53855]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 11:57:32 compute-0 sudo[54010]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilnbhwyjxjctheeyghurnlfehkjuuyaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281452.4762738-80-78730087042483/AnsiballZ_setup.py'
Dec 09 11:57:32 compute-0 sudo[54010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:57:33 compute-0 python3.9[54012]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 09 11:57:33 compute-0 sudo[54010]: pam_unix(sudo:session): session closed for user root
Dec 09 11:57:33 compute-0 sudo[54094]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blumwaakvlkmgnbbsmtzqsggyanhipcz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281452.4762738-80-78730087042483/AnsiballZ_dnf.py'
Dec 09 11:57:33 compute-0 sudo[54094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:57:33 compute-0 python3.9[54096]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 09 11:57:35 compute-0 sudo[54094]: pam_unix(sudo:session): session closed for user root
Dec 09 11:57:35 compute-0 sudo[54248]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oatdnthwyssmglmosoltuiiavtofrarp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281455.7311318-116-278963057508591/AnsiballZ_setup.py'
Dec 09 11:57:35 compute-0 sudo[54248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:57:36 compute-0 python3.9[54250]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 09 11:57:36 compute-0 sudo[54248]: pam_unix(sudo:session): session closed for user root
Dec 09 11:57:37 compute-0 sudo[54443]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfjilgmaiznqulxomtdpcvekryzgltnz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281457.0314956-149-124235704794969/AnsiballZ_file.py'
Dec 09 11:57:37 compute-0 sudo[54443]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:57:37 compute-0 python3.9[54445]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:57:37 compute-0 sudo[54443]: pam_unix(sudo:session): session closed for user root
Dec 09 11:57:38 compute-0 sudo[54595]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sebxselhaxfeulexfkuczgiqywxmuqno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281457.925294-173-206505956661203/AnsiballZ_command.py'
Dec 09 11:57:38 compute-0 sudo[54595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:57:38 compute-0 python3.9[54597]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 11:57:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat2589559109-merged.mount: Deactivated successfully.
Dec 09 11:57:38 compute-0 podman[54598]: 2025-12-09 11:57:38.603770817 +0000 UTC m=+0.060457303 system refresh
Dec 09 11:57:38 compute-0 sudo[54595]: pam_unix(sudo:session): session closed for user root
Dec 09 11:57:39 compute-0 sudo[54758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwxvcnairohshctoakhtxwrhgefkeayo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281458.837466-197-126761722458872/AnsiballZ_stat.py'
Dec 09 11:57:39 compute-0 sudo[54758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:57:39 compute-0 python3.9[54760]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 11:57:39 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 09 11:57:39 compute-0 sudo[54758]: pam_unix(sudo:session): session closed for user root
Dec 09 11:57:40 compute-0 sudo[54881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rujwkcprofhyvbqjaqdmfeasivoqhfpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281458.837466-197-126761722458872/AnsiballZ_copy.py'
Dec 09 11:57:40 compute-0 sudo[54881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:57:40 compute-0 python3.9[54883]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765281458.837466-197-126761722458872/.source.json follow=False _original_basename=podman_network_config.j2 checksum=c23925a02137898124ec4808beeadf26deed32d6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:57:40 compute-0 sudo[54881]: pam_unix(sudo:session): session closed for user root
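Back to back, the role inspects the default podman network and then lays down its own definition as /etc/containers/networks/podman.json. A minimal sketch pairing the two steps; the field names printed (name, driver, subnets) are standard podman network-inspect output, not values taken from this log:

    #!/usr/bin/python3.9
    """Check the network definition deployed to /etc/containers/networks."""
    import json, subprocess

    out = subprocess.run(["podman", "network", "inspect", "podman"],
                         check=True, capture_output=True, text=True).stdout
    net = json.loads(out)[0]        # inspect returns a JSON array

    print(net["name"], net["driver"])
    for sub in net.get("subnets", []):
        print(sub["subnet"], sub.get("gateway"))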
Dec 09 11:57:40 compute-0 sudo[55033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmjboewdxqdghapbgfasvzpnpjwvjkoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281460.5452597-242-168983293699950/AnsiballZ_stat.py'
Dec 09 11:57:40 compute-0 sudo[55033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:57:41 compute-0 python3.9[55035]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 11:57:41 compute-0 sudo[55033]: pam_unix(sudo:session): session closed for user root
Dec 09 11:57:41 compute-0 sudo[55156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xekaiopvahoksyykegnftcfctzgfecph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281460.5452597-242-168983293699950/AnsiballZ_copy.py'
Dec 09 11:57:41 compute-0 sudo[55156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:57:41 compute-0 python3.9[55158]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765281460.5452597-242-168983293699950/.source.conf follow=False _original_basename=registries.conf.j2 checksum=bd8960d09011f95ec8946d00609d580926fa47cd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 09 11:57:41 compute-0 sudo[55156]: pam_unix(sudo:session): session closed for user root
Dec 09 11:57:42 compute-0 sudo[55308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvwrjtdaqqrtuldevyijkgtpghshgdrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281461.8309727-290-152610639445202/AnsiballZ_ini_file.py'
Dec 09 11:57:42 compute-0 sudo[55308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:57:42 compute-0 python3.9[55310]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 09 11:57:42 compute-0 sudo[55308]: pam_unix(sudo:session): session closed for user root
Dec 09 11:57:42 compute-0 sudo[55460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxtdowkpxxflwlfjdsmtaftfokedlvsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281462.659191-290-74208154185335/AnsiballZ_ini_file.py'
Dec 09 11:57:42 compute-0 sudo[55460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:57:43 compute-0 python3.9[55462]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 09 11:57:43 compute-0 sudo[55460]: pam_unix(sudo:session): session closed for user root
Dec 09 11:57:43 compute-0 sudo[55612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhlxcazutjeqybgrwlxokgxuyfofvsfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281463.3009114-290-150862480458344/AnsiballZ_ini_file.py'
Dec 09 11:57:43 compute-0 sudo[55612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:57:43 compute-0 python3.9[55614]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 09 11:57:43 compute-0 sudo[55612]: pam_unix(sudo:session): session closed for user root
Dec 09 11:57:44 compute-0 sudo[55764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzmwoiwcbbthcelulzbrfadkdgbfmfjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281463.9163547-290-67666900744551/AnsiballZ_ini_file.py'
Dec 09 11:57:44 compute-0 sudo[55764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:57:44 compute-0 python3.9[55766]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 09 11:57:44 compute-0 sudo[55764]: pam_unix(sudo:session): session closed for user root
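Taken together, the four ini_file invocations above pin /etc/containers/containers.conf to pids_limit=4096 under [containers], events_logger="journald" and runtime="crun" under [engine], and network_backend="netavark" under [network]. A minimal replay of those edits, assuming INI-style handling is acceptable here (containers.conf is TOML, but ini_file treats it as INI, which is why the string values in the log carry their own double quotes):

    #!/usr/bin/python3.9
    """Replay the four ini_file edits above against containers.conf."""
    import configparser

    SETTINGS = {                  # section -> option -> value, from the log
        "containers": {"pids_limit": "4096"},
        "engine":     {"events_logger": '"journald"', "runtime": '"crun"'},
        "network":    {"network_backend": '"netavark"'},
    }

    conf = configparser.ConfigParser()
    conf.read("/etc/containers/containers.conf")   # tolerates a missing file

    for section, options in SETTINGS.items():
        if not conf.has_section(section):
            conf.add_section(section)
        for option, value in options.items():
            conf.set(section, option, value)

    with open("/etc/containers/containers.conf", "w") as fh:
        conf.write(fh)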
Dec 09 11:57:45 compute-0 sudo[55916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akvnqorhzyxaeshhbcdfnysbachwglls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281464.7201705-383-161231854799846/AnsiballZ_dnf.py'
Dec 09 11:57:45 compute-0 sudo[55916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:57:45 compute-0 python3.9[55918]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 09 11:57:46 compute-0 sudo[55916]: pam_unix(sudo:session): session closed for user root
Dec 09 11:57:47 compute-0 sudo[56069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifzatghjxmgmpztsrkiryuufxsghonas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281467.403369-416-197121955852436/AnsiballZ_setup.py'
Dec 09 11:57:47 compute-0 sudo[56069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:57:48 compute-0 python3.9[56071]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 11:57:48 compute-0 sudo[56069]: pam_unix(sudo:session): session closed for user root
Dec 09 11:57:48 compute-0 sudo[56223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crtaudfjzadsftwuhnaaxdupoahwwuca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281468.247331-440-175287588086696/AnsiballZ_stat.py'
Dec 09 11:57:48 compute-0 sudo[56223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:57:48 compute-0 python3.9[56225]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 11:57:48 compute-0 sudo[56223]: pam_unix(sudo:session): session closed for user root
Dec 09 11:57:49 compute-0 sudo[56375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqfcxuymmojuqcvhyvvwrbghwuunnepj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281469.036965-467-213373647275563/AnsiballZ_stat.py'
Dec 09 11:57:49 compute-0 sudo[56375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:57:49 compute-0 python3.9[56377]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 11:57:49 compute-0 sudo[56375]: pam_unix(sudo:session): session closed for user root
Dec 09 11:57:50 compute-0 sudo[56527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udfgcljrnhwkfcxkhokjydgxfpuqpbxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281469.9529352-497-31695017498145/AnsiballZ_command.py'
Dec 09 11:57:50 compute-0 sudo[56527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:57:50 compute-0 python3.9[56529]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 11:57:50 compute-0 sudo[56527]: pam_unix(sudo:session): session closed for user root
Dec 09 11:57:51 compute-0 sudo[56680]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-luqgbsbckkfchbionfatnmvnnbziqtho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281470.736273-527-21721305887986/AnsiballZ_service_facts.py'
Dec 09 11:57:51 compute-0 sudo[56680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:57:51 compute-0 python3.9[56682]: ansible-service_facts Invoked
Dec 09 11:57:51 compute-0 network[56699]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 09 11:57:51 compute-0 network[56700]: 'network-scripts' will be removed from distribution in near future.
Dec 09 11:57:51 compute-0 network[56701]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 09 11:57:53 compute-0 sudo[56680]: pam_unix(sudo:session): session closed for user root
Dec 09 11:57:55 compute-0 sudo[56984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbwunctpjqbmdbusdyanlfrsybudnywo ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1765281474.8869486-572-119205751334772/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1765281474.8869486-572-119205751334772/args'
Dec 09 11:57:55 compute-0 sudo[56984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:57:55 compute-0 sudo[56984]: pam_unix(sudo:session): session closed for user root
Dec 09 11:57:55 compute-0 sudo[57151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ammhvmxsncwflucyobmlwtbnhhtmdpxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281475.6264615-605-215999244393962/AnsiballZ_dnf.py'
Dec 09 11:57:55 compute-0 sudo[57151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:57:56 compute-0 python3.9[57153]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 09 11:57:57 compute-0 sudo[57151]: pam_unix(sudo:session): session closed for user root
Dec 09 11:57:59 compute-0 sudo[57304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nuyputhmqaesgfauqengabufnolyfuyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281478.691598-644-4363282729402/AnsiballZ_package_facts.py'
Dec 09 11:57:59 compute-0 sudo[57304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:57:59 compute-0 python3.9[57306]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Dec 09 11:57:59 compute-0 sudo[57304]: pam_unix(sudo:session): session closed for user root
Dec 09 11:58:00 compute-0 sudo[57456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkdqjsqxjtfegrczyodxdywningvjlgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281480.6433642-674-246708586624227/AnsiballZ_stat.py'
Dec 09 11:58:00 compute-0 sudo[57456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:58:01 compute-0 python3.9[57458]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 11:58:01 compute-0 sudo[57456]: pam_unix(sudo:session): session closed for user root
Dec 09 11:58:01 compute-0 sudo[57581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klclgezrsweiihsmozglmhhjzjcivqpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281480.6433642-674-246708586624227/AnsiballZ_copy.py'
Dec 09 11:58:01 compute-0 sudo[57581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:58:01 compute-0 python3.9[57583]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765281480.6433642-674-246708586624227/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:58:01 compute-0 sudo[57581]: pam_unix(sudo:session): session closed for user root
Dec 09 11:58:02 compute-0 sudo[57735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knuumuaemyhtgvupkzklpdlqgyvzogra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281482.123079-719-222673111465763/AnsiballZ_stat.py'
Dec 09 11:58:02 compute-0 sudo[57735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:58:02 compute-0 python3.9[57737]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 11:58:02 compute-0 sudo[57735]: pam_unix(sudo:session): session closed for user root
Dec 09 11:58:02 compute-0 sudo[57860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wixnlnjwiubqhsoqycmvchztbxahkltm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281482.123079-719-222673111465763/AnsiballZ_copy.py'
Dec 09 11:58:02 compute-0 sudo[57860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:58:03 compute-0 python3.9[57862]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765281482.123079-719-222673111465763/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:58:03 compute-0 sudo[57860]: pam_unix(sudo:session): session closed for user root
Dec 09 11:58:04 compute-0 sudo[58014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmgydtwhwboyurswyjlbhztzclaofjpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281484.2421236-782-263444313926232/AnsiballZ_lineinfile.py'
Dec 09 11:58:04 compute-0 sudo[58014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:58:04 compute-0 python3.9[58016]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:58:05 compute-0 sudo[58014]: pam_unix(sudo:session): session closed for user root
Dec 09 11:58:06 compute-0 sudo[58168]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xszaqowkzrqwffwmsemqxzldflqtqfzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281486.0893755-827-37013009669186/AnsiballZ_setup.py'
Dec 09 11:58:06 compute-0 sudo[58168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:58:06 compute-0 python3.9[58170]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 09 11:58:07 compute-0 sudo[58168]: pam_unix(sudo:session): session closed for user root
Dec 09 11:58:07 compute-0 sudo[58252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhjtbityyqipoqxbpndeavnmjvwwjvtk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281486.0893755-827-37013009669186/AnsiballZ_systemd.py'
Dec 09 11:58:07 compute-0 sudo[58252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:58:07 compute-0 python3.9[58254]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 11:58:07 compute-0 sudo[58252]: pam_unix(sudo:session): session closed for user root
Dec 09 11:58:09 compute-0 sudo[58406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htrhzdcboallewsoplgutcnacjyupvjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281488.8007035-875-29232521397098/AnsiballZ_setup.py'
Dec 09 11:58:09 compute-0 sudo[58406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:58:09 compute-0 python3.9[58408]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 09 11:58:09 compute-0 sudo[58406]: pam_unix(sudo:session): session closed for user root
Dec 09 11:58:09 compute-0 sudo[58490]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkjwxthjjxkiekrynbwazlclmdnpwemy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281488.8007035-875-29232521397098/AnsiballZ_systemd.py'
Dec 09 11:58:09 compute-0 sudo[58490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:58:10 compute-0 python3.9[58492]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 09 11:58:10 compute-0 systemd[1]: Stopping NTP client/server...
Dec 09 11:58:10 compute-0 chronyd[790]: chronyd exiting
Dec 09 11:58:10 compute-0 systemd[1]: chronyd.service: Deactivated successfully.
Dec 09 11:58:10 compute-0 systemd[1]: Stopped NTP client/server.
Dec 09 11:58:10 compute-0 systemd[1]: Starting NTP client/server...
Dec 09 11:58:10 compute-0 chronyd[58501]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec 09 11:58:10 compute-0 chronyd[58501]: Frequency -31.651 +/- 0.069 ppm read from /var/lib/chrony/drift
Dec 09 11:58:10 compute-0 chronyd[58501]: Loaded seccomp filter (level 2)
Dec 09 11:58:10 compute-0 systemd[1]: Started NTP client/server.
Dec 09 11:58:10 compute-0 sudo[58490]: pam_unix(sudo:session): session closed for user root
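
Taken together, 11:57:55-11:58:10 is a full chrony rollout: install the package, template /etc/chrony.conf and /etc/sysconfig/chronyd (both with backup=True), pin PEERNTP=no in /etc/sysconfig/network so DHCP-supplied NTP servers are ignored, enable chronyd, then restart it; chronyd 4.8 comes back up, reads its drift file (-31.651 ppm) and loads its seccomp filter. A post-rollout check, assuming only that chronyc from the just-installed chrony package is on PATH:

    import subprocess

    def chrony_synced(timeout=10):
        """Return True if 'chronyc tracking' reports a nonzero reference ID."""
        out = subprocess.run(
            ["chronyc", "tracking"],
            capture_output=True, text=True, timeout=timeout, check=True,
        ).stdout
        # 'Reference ID : 00000000 ()' means chronyd has not synchronised yet.
        for line in out.splitlines():
            if line.startswith("Reference ID"):
                return "00000000" not in line
        return False

    print("chrony synchronised:", chrony_synced())
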
Dec 09 11:58:11 compute-0 sshd-session[53551]: Connection closed by 192.168.122.30 port 33222
Dec 09 11:58:11 compute-0 sshd-session[53548]: pam_unix(sshd:session): session closed for user zuul
Dec 09 11:58:11 compute-0 systemd[1]: session-12.scope: Deactivated successfully.
Dec 09 11:58:11 compute-0 systemd[1]: session-12.scope: Consumed 26.635s CPU time.
Dec 09 11:58:11 compute-0 systemd-logind[799]: Session 12 logged out. Waiting for processes to exit.
Dec 09 11:58:11 compute-0 systemd-logind[799]: Removed session 12.
Dec 09 11:58:16 compute-0 sshd-session[58527]: Accepted publickey for zuul from 192.168.122.30 port 43750 ssh2: ECDSA SHA256:9TQybH6jbBrVcztEaDmRsG3ssVtaycQ7UiUr3v9GScY
Dec 09 11:58:16 compute-0 systemd-logind[799]: New session 13 of user zuul.
Dec 09 11:58:16 compute-0 systemd[1]: Started Session 13 of User zuul.
Dec 09 11:58:16 compute-0 sshd-session[58527]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 11:58:17 compute-0 sudo[58680]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-myjxvftwloxvhrslqiplanlcygrpdeij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281496.8054953-26-77934831482249/AnsiballZ_file.py'
Dec 09 11:58:17 compute-0 sudo[58680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:58:17 compute-0 python3.9[58682]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:58:17 compute-0 sudo[58680]: pam_unix(sudo:session): session closed for user root
Dec 09 11:58:18 compute-0 sudo[58832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bludznrqhrxtqountwmnyavftwjntnyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281497.669159-62-151734953958009/AnsiballZ_stat.py'
Dec 09 11:58:18 compute-0 sudo[58832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:58:18 compute-0 python3.9[58834]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 11:58:18 compute-0 sudo[58832]: pam_unix(sudo:session): session closed for user root
Dec 09 11:58:18 compute-0 sudo[58955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbbjtbntxytbeahiytsqvluncdxbhjrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281497.669159-62-151734953958009/AnsiballZ_copy.py'
Dec 09 11:58:18 compute-0 sudo[58955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:58:19 compute-0 python3.9[58957]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765281497.669159-62-151734953958009/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:58:19 compute-0 sudo[58955]: pam_unix(sudo:session): session closed for user root
Dec 09 11:58:19 compute-0 sshd-session[58530]: Connection closed by 192.168.122.30 port 43750
Dec 09 11:58:19 compute-0 sshd-session[58527]: pam_unix(sshd:session): session closed for user zuul
Dec 09 11:58:19 compute-0 systemd[1]: session-13.scope: Deactivated successfully.
Dec 09 11:58:19 compute-0 systemd[1]: session-13.scope: Consumed 1.699s CPU time.
Dec 09 11:58:19 compute-0 systemd-logind[799]: Session 13 logged out. Waiting for processes to exit.
Dec 09 11:58:19 compute-0 systemd-logind[799]: Removed session 13.
Dec 09 11:58:25 compute-0 sshd-session[58982]: Accepted publickey for zuul from 192.168.122.30 port 58278 ssh2: ECDSA SHA256:9TQybH6jbBrVcztEaDmRsG3ssVtaycQ7UiUr3v9GScY
Dec 09 11:58:25 compute-0 systemd-logind[799]: New session 14 of user zuul.
Dec 09 11:58:25 compute-0 systemd[1]: Started Session 14 of User zuul.
Dec 09 11:58:25 compute-0 sshd-session[58982]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 11:58:26 compute-0 python3.9[59135]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 11:58:27 compute-0 sudo[59289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlpojlncyeyfapauisvgpcxpwngcdlmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281506.811978-59-17522451640603/AnsiballZ_file.py'
Dec 09 11:58:27 compute-0 sudo[59289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:58:27 compute-0 python3.9[59291]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:58:27 compute-0 sudo[59289]: pam_unix(sudo:session): session closed for user root
Dec 09 11:58:28 compute-0 sudo[59464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fsjezqnxlkwoftlvnjtnprvberkgrplw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281507.737871-83-59378788925287/AnsiballZ_stat.py'
Dec 09 11:58:28 compute-0 sudo[59464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:58:28 compute-0 python3.9[59466]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 11:58:28 compute-0 sudo[59464]: pam_unix(sudo:session): session closed for user root
Dec 09 11:58:28 compute-0 sudo[59587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdwnebvdngtnlbpobjhsorajdxhwfesu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281507.737871-83-59378788925287/AnsiballZ_copy.py'
Dec 09 11:58:28 compute-0 sudo[59587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:58:29 compute-0 python3.9[59589]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1765281507.737871-83-59378788925287/.source.json _original_basename=.ryfsabec follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:58:29 compute-0 sudo[59587]: pam_unix(sudo:session): session closed for user root
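
The copy above drops /root/.config/containers/auth.json, podman's registry-credential store, owned by zuul with mode 0660. The logged checksum bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f is the SHA-1 of the literal two-byte string "{}", so the deployed file appears to be an empty JSON object, i.e. no registry logins configured yet. Easy to confirm:

    import hashlib
    # SHA-1 of an empty JSON object; compare with the checksum in the copy task above.
    print(hashlib.sha1(b"{}").hexdigest())
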
Dec 09 11:58:29 compute-0 sudo[59739]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zatdxhmwlywtwhwkbtylnedrfyivljwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281509.6287148-152-181444938603539/AnsiballZ_stat.py'
Dec 09 11:58:29 compute-0 sudo[59739]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:58:30 compute-0 python3.9[59741]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 11:58:30 compute-0 sudo[59739]: pam_unix(sudo:session): session closed for user root
Dec 09 11:58:30 compute-0 sudo[59862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxekombkhyugihxdkqbqqlbdfkdkbadn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281509.6287148-152-181444938603539/AnsiballZ_copy.py'
Dec 09 11:58:30 compute-0 sudo[59862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:58:30 compute-0 python3.9[59864]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765281509.6287148-152-181444938603539/.source _original_basename=.wgln1gch follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:58:30 compute-0 sudo[59862]: pam_unix(sudo:session): session closed for user root
Dec 09 11:58:31 compute-0 sudo[60014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgqrlhpbyyijszfcuiuowpiwmtkthayp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281510.9069202-200-146908564994439/AnsiballZ_file.py'
Dec 09 11:58:31 compute-0 sudo[60014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:58:31 compute-0 python3.9[60016]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 09 11:58:31 compute-0 sudo[60014]: pam_unix(sudo:session): session closed for user root
Dec 09 11:58:32 compute-0 sudo[60166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxfqrgzxemgavvbaoxvdofjxhjycloyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281511.8017533-224-11919527900531/AnsiballZ_stat.py'
Dec 09 11:58:32 compute-0 sudo[60166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:58:32 compute-0 python3.9[60168]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 11:58:32 compute-0 sudo[60166]: pam_unix(sudo:session): session closed for user root
Dec 09 11:58:32 compute-0 sudo[60289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqpteldkvscbkkwehtgeaflepgkbyuvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281511.8017533-224-11919527900531/AnsiballZ_copy.py'
Dec 09 11:58:32 compute-0 sudo[60289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:58:32 compute-0 python3.9[60291]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765281511.8017533-224-11919527900531/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 09 11:58:32 compute-0 sudo[60289]: pam_unix(sudo:session): session closed for user root
Dec 09 11:58:33 compute-0 sudo[60441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jorzgormxhxhtakwasgeftwmzbovpsrt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281512.9769948-224-137276540223055/AnsiballZ_stat.py'
Dec 09 11:58:33 compute-0 sudo[60441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:58:33 compute-0 python3.9[60443]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 11:58:33 compute-0 sudo[60441]: pam_unix(sudo:session): session closed for user root
Dec 09 11:58:33 compute-0 sudo[60564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjzicrkjbidxxqvyimzmvrpnvejuucpj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281512.9769948-224-137276540223055/AnsiballZ_copy.py'
Dec 09 11:58:33 compute-0 sudo[60564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:58:33 compute-0 python3.9[60566]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765281512.9769948-224-137276540223055/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 09 11:58:34 compute-0 sudo[60564]: pam_unix(sudo:session): session closed for user root
Dec 09 11:58:34 compute-0 sudo[60716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oluprqctqynsjakukzwoptpumllffcqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281514.289269-311-60927636185763/AnsiballZ_file.py'
Dec 09 11:58:34 compute-0 sudo[60716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:58:34 compute-0 python3.9[60718]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:58:34 compute-0 sudo[60716]: pam_unix(sudo:session): session closed for user root
Dec 09 11:58:35 compute-0 sudo[60868]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rowuwbnocplntqxwzywkjjnvjqwlonaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281514.9936879-335-199784345811324/AnsiballZ_stat.py'
Dec 09 11:58:35 compute-0 sudo[60868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:58:35 compute-0 python3.9[60870]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 11:58:35 compute-0 sudo[60868]: pam_unix(sudo:session): session closed for user root
Dec 09 11:58:36 compute-0 sudo[60991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ricbcepwifodofwwmswiooxekmzfcbjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281514.9936879-335-199784345811324/AnsiballZ_copy.py'
Dec 09 11:58:36 compute-0 sudo[60991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:58:36 compute-0 python3.9[60993]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765281514.9936879-335-199784345811324/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:58:36 compute-0 sudo[60991]: pam_unix(sudo:session): session closed for user root
Dec 09 11:58:36 compute-0 sudo[61143]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apdwlanzgzyprgzcctppknvlptnatbew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281516.558261-380-244880511782862/AnsiballZ_stat.py'
Dec 09 11:58:36 compute-0 sudo[61143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:58:37 compute-0 python3.9[61145]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 11:58:37 compute-0 sudo[61143]: pam_unix(sudo:session): session closed for user root
Dec 09 11:58:37 compute-0 sudo[61266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-byhcvbkadbohgkaixeguufuggrfrhjup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281516.558261-380-244880511782862/AnsiballZ_copy.py'
Dec 09 11:58:37 compute-0 sudo[61266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:58:37 compute-0 python3.9[61268]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765281516.558261-380-244880511782862/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:58:37 compute-0 sudo[61266]: pam_unix(sudo:session): session closed for user root
Dec 09 11:58:38 compute-0 sudo[61418]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbeawzeldbuqbexkieioejckbvyhgsmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281517.8144422-425-148852945302353/AnsiballZ_systemd.py'
Dec 09 11:58:38 compute-0 sudo[61418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:58:38 compute-0 python3.9[61420]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 11:58:38 compute-0 systemd[1]: Reloading.
Dec 09 11:58:38 compute-0 systemd-rc-local-generator[61446]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 11:58:38 compute-0 systemd-sysv-generator[61449]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 11:58:39 compute-0 systemd[1]: Reloading.
Dec 09 11:58:39 compute-0 systemd-rc-local-generator[61484]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 11:58:39 compute-0 systemd-sysv-generator[61487]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 11:58:39 compute-0 systemd[1]: Starting EDPM Container Shutdown...
Dec 09 11:58:39 compute-0 systemd[1]: Finished EDPM Container Shutdown.
Dec 09 11:58:39 compute-0 sudo[61418]: pam_unix(sudo:session): session closed for user root
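
edpm-container-shutdown is delivered as a unit file plus a preset: files under /etc/systemd/system-preset declare, one directive per line (e.g. "enable edpm-container-shutdown.service"), what state `systemctl preset` should put a unit in, which is how the role ships its default enablement policy. The double "Reloading." above is daemon_reload=True followed by the enablement itself. A quick post-condition check (unit name taken from the log; needs nothing beyond systemd):

    import subprocess

    for unit in ("edpm-container-shutdown.service",):
        state = subprocess.run(
            ["systemctl", "is-enabled", unit],
            capture_output=True, text=True,
        ).stdout.strip()
        print(unit, "->", state)   # expected "enabled" after the play above
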
Dec 09 11:58:39 compute-0 sudo[61647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtuqfywmkukcgimlwygagokhxddbjdqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281519.5517359-449-35581995502282/AnsiballZ_stat.py'
Dec 09 11:58:39 compute-0 sudo[61647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:58:40 compute-0 python3.9[61649]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 11:58:40 compute-0 sudo[61647]: pam_unix(sudo:session): session closed for user root
Dec 09 11:58:40 compute-0 sudo[61770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vthslhjaxouqplrpsokcupatkyrkgdhh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281519.5517359-449-35581995502282/AnsiballZ_copy.py'
Dec 09 11:58:40 compute-0 sudo[61770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:58:40 compute-0 python3.9[61772]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765281519.5517359-449-35581995502282/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:58:40 compute-0 sudo[61770]: pam_unix(sudo:session): session closed for user root
Dec 09 11:58:41 compute-0 sudo[61922]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbvazvrqnbmcwzjxncxmvhdnegkcdfoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281521.385619-494-74997141087359/AnsiballZ_stat.py'
Dec 09 11:58:41 compute-0 sudo[61922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:58:41 compute-0 python3.9[61924]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 11:58:41 compute-0 sudo[61922]: pam_unix(sudo:session): session closed for user root
Dec 09 11:58:42 compute-0 sudo[62045]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucdxafmuvsulyliqqgghgkncnjszhjvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281521.385619-494-74997141087359/AnsiballZ_copy.py'
Dec 09 11:58:42 compute-0 sudo[62045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:58:42 compute-0 python3.9[62047]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765281521.385619-494-74997141087359/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:58:42 compute-0 sudo[62045]: pam_unix(sudo:session): session closed for user root
Dec 09 11:58:42 compute-0 sudo[62197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfvmrctpetylpkiinfzzvenknfigmwsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281522.62447-539-95484174735684/AnsiballZ_systemd.py'
Dec 09 11:58:42 compute-0 sudo[62197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:58:43 compute-0 python3.9[62199]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 11:58:43 compute-0 systemd[1]: Reloading.
Dec 09 11:58:43 compute-0 systemd-rc-local-generator[62226]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 11:58:43 compute-0 systemd-sysv-generator[62231]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 11:58:43 compute-0 systemd[1]: Reloading.
Dec 09 11:58:43 compute-0 systemd-rc-local-generator[62265]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 11:58:43 compute-0 systemd-sysv-generator[62268]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 11:58:43 compute-0 systemd[1]: Starting Create netns directory...
Dec 09 11:58:43 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 09 11:58:43 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 09 11:58:43 compute-0 systemd[1]: Finished Create netns directory.
Dec 09 11:58:43 compute-0 sudo[62197]: pam_unix(sudo:session): session closed for user root
Dec 09 11:58:44 compute-0 python3.9[62427]: ansible-ansible.builtin.service_facts Invoked
Dec 09 11:58:44 compute-0 network[62444]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 09 11:58:44 compute-0 network[62445]: 'network-scripts' will be removed from distribution in near future.
Dec 09 11:58:44 compute-0 network[62446]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 09 11:58:48 compute-0 sudo[62706]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txyajoaiacgdmpbpwkyyfhhymnawcqcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281528.3579352-587-59465923254495/AnsiballZ_systemd.py'
Dec 09 11:58:48 compute-0 sudo[62706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:58:49 compute-0 python3.9[62708]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 11:58:49 compute-0 systemd[1]: Reloading.
Dec 09 11:58:49 compute-0 systemd-rc-local-generator[62737]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 11:58:49 compute-0 systemd-sysv-generator[62740]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 11:58:49 compute-0 systemd[1]: Stopping IPv4 firewall with iptables...
Dec 09 11:58:49 compute-0 iptables.init[62747]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Dec 09 11:58:49 compute-0 iptables.init[62747]: iptables: Flushing firewall rules: [  OK  ]
Dec 09 11:58:49 compute-0 systemd[1]: iptables.service: Deactivated successfully.
Dec 09 11:58:49 compute-0 systemd[1]: Stopped IPv4 firewall with iptables.
Dec 09 11:58:49 compute-0 sudo[62706]: pam_unix(sudo:session): session closed for user root
Dec 09 11:58:50 compute-0 sudo[62942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwiopnjvbbzfmvbtjbftzjujighyfvsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281530.024161-587-225791975293237/AnsiballZ_systemd.py'
Dec 09 11:58:50 compute-0 sudo[62942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:58:50 compute-0 python3.9[62944]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 11:58:50 compute-0 sudo[62942]: pam_unix(sudo:session): session closed for user root
Dec 09 11:58:51 compute-0 sudo[63096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rejvjfbqcnavjcbrsbnzujxyziqxrqor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281530.9755695-635-102175926768623/AnsiballZ_systemd.py'
Dec 09 11:58:51 compute-0 sudo[63096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:58:51 compute-0 python3.9[63098]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 11:58:51 compute-0 systemd[1]: Reloading.
Dec 09 11:58:51 compute-0 systemd-rc-local-generator[63128]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 11:58:51 compute-0 systemd-sysv-generator[63131]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 11:58:51 compute-0 systemd[1]: Starting Netfilter Tables...
Dec 09 11:58:51 compute-0 systemd[1]: Finished Netfilter Tables.
Dec 09 11:58:51 compute-0 sudo[63096]: pam_unix(sudo:session): session closed for user root
Dec 09 11:58:52 compute-0 sudo[63288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfnajazkvucijfvocevxlptxmjkflfye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281532.1806548-659-84046323194755/AnsiballZ_command.py'
Dec 09 11:58:52 compute-0 sudo[63288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:58:52 compute-0 python3.9[63290]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 11:58:52 compute-0 sudo[63288]: pam_unix(sudo:session): session closed for user root
Dec 09 11:58:53 compute-0 sudo[63441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipwexvssgqmpvrsxqmbbnfrcvpfafwqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281533.49476-701-162220986986588/AnsiballZ_stat.py'
Dec 09 11:58:53 compute-0 sudo[63441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:58:54 compute-0 python3.9[63443]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 11:58:54 compute-0 sudo[63441]: pam_unix(sudo:session): session closed for user root
Dec 09 11:58:54 compute-0 sudo[63566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgcajzupderhkuqytsrqzjgumxkjoeax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281533.49476-701-162220986986588/AnsiballZ_copy.py'
Dec 09 11:58:54 compute-0 sudo[63566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:58:54 compute-0 python3.9[63568]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765281533.49476-701-162220986986588/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:58:54 compute-0 sudo[63566]: pam_unix(sudo:session): session closed for user root
Dec 09 11:58:55 compute-0 sudo[63719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpydoydvhtspgszhrkzxpvihmkrtmrjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281534.9316761-746-96481926480495/AnsiballZ_systemd.py'
Dec 09 11:58:55 compute-0 sudo[63719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:58:55 compute-0 python3.9[63721]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 09 11:58:55 compute-0 systemd[1]: Reloading OpenSSH server daemon...
Dec 09 11:58:55 compute-0 sshd[1008]: Received SIGHUP; restarting.
Dec 09 11:58:55 compute-0 systemd[1]: Reloaded OpenSSH server daemon.
Dec 09 11:58:55 compute-0 sshd[1008]: Server listening on 0.0.0.0 port 22.
Dec 09 11:58:55 compute-0 sshd[1008]: Server listening on :: port 22.
Dec 09 11:58:55 compute-0 sudo[63719]: pam_unix(sudo:session): session closed for user root
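
Note the validate=/usr/sbin/sshd -T -f %s argument on the copy into /etc/ssh/sshd_config: Ansible writes the rendered file to a temporary path, substitutes that path for %s, and only moves it into place if sshd's test mode exits 0; the service is then reloaded rather than restarted (SIGHUP, as the sshd[1008] lines show), so existing sessions survive. The same guard is easy to reproduce outside Ansible:

    import subprocess, sys

    def sshd_config_ok(path):
        """Run sshd in test mode (-T dumps the effective config, -f selects the file)."""
        res = subprocess.run(
            ["/usr/sbin/sshd", "-T", "-f", path],
            capture_output=True, text=True,
        )
        if res.returncode != 0:
            sys.stderr.write(res.stderr)
        return res.returncode == 0

    print(sshd_config_ok("/etc/ssh/sshd_config"))   # needs root and host keys present
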
Dec 09 11:58:56 compute-0 sudo[63875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sreqvqhhwnreeohuiicpfavtvehaqovk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281535.8589458-770-219711580095755/AnsiballZ_file.py'
Dec 09 11:58:56 compute-0 sudo[63875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:58:56 compute-0 python3.9[63877]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:58:56 compute-0 sudo[63875]: pam_unix(sudo:session): session closed for user root
Dec 09 11:58:56 compute-0 sudo[64027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgofsqnibonzhlnnldvhtlikpmksmecf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281536.5584795-794-219526003263567/AnsiballZ_stat.py'
Dec 09 11:58:56 compute-0 sudo[64027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:58:57 compute-0 python3.9[64029]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 11:58:57 compute-0 sudo[64027]: pam_unix(sudo:session): session closed for user root
Dec 09 11:58:57 compute-0 sudo[64150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpdaorvidfpweqblxaitmkmnfombegak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281536.5584795-794-219526003263567/AnsiballZ_copy.py'
Dec 09 11:58:57 compute-0 sudo[64150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:58:57 compute-0 python3.9[64152]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765281536.5584795-794-219526003263567/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:58:57 compute-0 sudo[64150]: pam_unix(sudo:session): session closed for user root
Dec 09 11:58:58 compute-0 sudo[64302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldvxmddwxjqvdfojluauihifcryilvcb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281538.2820015-848-163341586165871/AnsiballZ_timezone.py'
Dec 09 11:58:58 compute-0 sudo[64302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:58:58 compute-0 python3.9[64304]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec 09 11:58:59 compute-0 systemd[1]: Starting Time & Date Service...
Dec 09 11:58:59 compute-0 systemd[1]: Started Time & Date Service.
Dec 09 11:58:59 compute-0 sudo[64302]: pam_unix(sudo:session): session closed for user root
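
community.general.timezone drives the change through systemd-timedated, which systemd starts on demand over D-Bus (the two "Time & Date Service" lines above); on a systemd host the task amounts to a single timedatectl call:

    import subprocess
    # Equivalent to the ansible-community.general.timezone task above (requires root).
    subprocess.run(["timedatectl", "set-timezone", "UTC"], check=True)
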
Dec 09 11:58:59 compute-0 sudo[64458]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bogrylcntxvasyhgulbsalzndrnmjuhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281539.4169664-875-97114191974928/AnsiballZ_file.py'
Dec 09 11:58:59 compute-0 sudo[64458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:58:59 compute-0 python3.9[64460]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:58:59 compute-0 sudo[64458]: pam_unix(sudo:session): session closed for user root
Dec 09 11:59:00 compute-0 sudo[64610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uquqzpkxfbjwzdtqczhgobmmhlyltjnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281540.1623936-899-78586486775564/AnsiballZ_stat.py'
Dec 09 11:59:00 compute-0 sudo[64610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:59:00 compute-0 python3.9[64612]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 11:59:00 compute-0 sudo[64610]: pam_unix(sudo:session): session closed for user root
Dec 09 11:59:01 compute-0 sudo[64733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mylqzideqzflwstnewcgwwlloxefgrez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281540.1623936-899-78586486775564/AnsiballZ_copy.py'
Dec 09 11:59:01 compute-0 sudo[64733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:59:01 compute-0 python3.9[64735]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765281540.1623936-899-78586486775564/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:59:01 compute-0 sudo[64733]: pam_unix(sudo:session): session closed for user root
Dec 09 11:59:01 compute-0 sudo[64885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oevunyxgrnogeunsweruehygpazjsmme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281541.4473605-944-3134616770019/AnsiballZ_stat.py'
Dec 09 11:59:01 compute-0 sudo[64885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:59:01 compute-0 python3.9[64887]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 11:59:01 compute-0 sudo[64885]: pam_unix(sudo:session): session closed for user root
Dec 09 11:59:02 compute-0 sudo[65008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcemnfozumzwdyoujvyagbebgdfhsixf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281541.4473605-944-3134616770019/AnsiballZ_copy.py'
Dec 09 11:59:02 compute-0 sudo[65008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:59:02 compute-0 python3.9[65010]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765281541.4473605-944-3134616770019/.source.yaml _original_basename=.1qb2phy7 follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:59:02 compute-0 sudo[65008]: pam_unix(sudo:session): session closed for user root
Dec 09 11:59:03 compute-0 sudo[65160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfypacovsqmznragblzscudjjpcbgfzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281542.8767908-989-185839910344255/AnsiballZ_stat.py'
Dec 09 11:59:03 compute-0 sudo[65160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:59:03 compute-0 python3.9[65162]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 11:59:03 compute-0 sudo[65160]: pam_unix(sudo:session): session closed for user root
Dec 09 11:59:03 compute-0 sudo[65283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-diuzymrdvkmyoaggknqgylbaxvotzgvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281542.8767908-989-185839910344255/AnsiballZ_copy.py'
Dec 09 11:59:04 compute-0 sudo[65283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:59:04 compute-0 python3.9[65286]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765281542.8767908-989-185839910344255/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:59:05 compute-0 sudo[65283]: pam_unix(sudo:session): session closed for user root
Dec 09 11:59:05 compute-0 sudo[65436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjtbwqhlzullexynqqedjvurfqjogmsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281545.1786773-1034-254530739115980/AnsiballZ_command.py'
Dec 09 11:59:05 compute-0 sudo[65436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:59:05 compute-0 python3.9[65438]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 11:59:05 compute-0 sudo[65436]: pam_unix(sudo:session): session closed for user root
Dec 09 11:59:06 compute-0 sudo[65589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlkuprewaczhhbxbkhnjgwlcmjbpnsvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281545.9038205-1058-239255407408843/AnsiballZ_command.py'
Dec 09 11:59:06 compute-0 sudo[65589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:59:06 compute-0 python3.9[65591]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 11:59:06 compute-0 sudo[65589]: pam_unix(sudo:session): session closed for user root
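
The firewall cutover follows a safe order: stop and disable the iptables/ip6tables services (their chains are reset to ACCEPT and flushed on the way down), enable and start nftables.service, flush any leftover nftables ruleset, load the compatibility rules from /etc/nftables/iptables.nft, then snapshot the result with `nft -j list ruleset`, whose JSON output is far easier to post-process than the plain listing. For example:

    import json, subprocess

    ruleset = json.loads(
        subprocess.run(["nft", "-j", "list", "ruleset"],
                       capture_output=True, text=True, check=True).stdout
    )
    # Top-level key is "nftables": a list of {"table":...}, {"chain":...}, {"rule":...} objects.
    chains = [obj["chain"] for obj in ruleset["nftables"] if "chain" in obj]
    for ch in chains:
        print(ch["family"], ch["table"], ch["name"], ch.get("policy"))
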
Dec 09 11:59:07 compute-0 sudo[65742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yswycwpkwroxatitvenmgxznwrvqeqkt ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765281546.6710985-1082-185358109636740/AnsiballZ_edpm_nftables_from_files.py'
Dec 09 11:59:07 compute-0 sudo[65742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:59:07 compute-0 python3[65744]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 09 11:59:07 compute-0 sudo[65742]: pam_unix(sudo:session): session closed for user root
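
edpm_nftables_from_files, evidently a custom module from the edpm-ansible collection (note it runs under plain /usr/bin/python3 rather than the python3.9 used by core modules), reads every YAML fragment the earlier tasks dropped under /var/lib/edpm-config/firewall (ceph-networks.yaml, sshd-networks.yaml, edpm-nftables-base.yaml, and edpm-nftables-user-rules.yaml, whose checksum 97d170e1550eee4afc0af065b78cda302a97674c is the SHA-1 of "[]", i.e. an empty rule list) and merges them into one rule set for the templated .nft files that follow. A hedged sketch of that merge, assuming each fragment is a YAML list of rule dicts and that PyYAML is available:

    import pathlib
    import yaml   # PyYAML assumed available

    def collect_rules(src="/var/lib/edpm-config/firewall"):
        """Concatenate every YAML fragment in src into a single list of rules."""
        rules = []
        for path in sorted(pathlib.Path(src).glob("*.yaml")):
            data = yaml.safe_load(path.read_text()) or []   # empty file -> no rules
            rules.extend(data)
        return rules

    print(len(collect_rules()), "rules collected")
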
Dec 09 11:59:08 compute-0 sudo[65894]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdcepsrbnjfjfxobohdvyszdndmzxiji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281547.8993142-1106-148310156244115/AnsiballZ_stat.py'
Dec 09 11:59:08 compute-0 sudo[65894]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:59:08 compute-0 python3.9[65896]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 11:59:08 compute-0 sudo[65894]: pam_unix(sudo:session): session closed for user root
Dec 09 11:59:08 compute-0 sudo[66017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekuctizzqgvthvepizyxgnjuqnxjwwut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281547.8993142-1106-148310156244115/AnsiballZ_copy.py'
Dec 09 11:59:08 compute-0 sudo[66017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:59:09 compute-0 python3.9[66019]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765281547.8993142-1106-148310156244115/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:59:09 compute-0 sudo[66017]: pam_unix(sudo:session): session closed for user root
Dec 09 11:59:09 compute-0 sudo[66169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ermrnmpdjfmvgfqvggdndfgxkzfhduou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281549.2641125-1151-124965957353282/AnsiballZ_stat.py'
Dec 09 11:59:09 compute-0 sudo[66169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:59:09 compute-0 python3.9[66171]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 11:59:09 compute-0 sudo[66169]: pam_unix(sudo:session): session closed for user root
Dec 09 11:59:10 compute-0 sudo[66292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mltljndzmqfhjgghgebccvqbvxgkbojd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281549.2641125-1151-124965957353282/AnsiballZ_copy.py'
Dec 09 11:59:10 compute-0 sudo[66292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:59:10 compute-0 python3.9[66294]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765281549.2641125-1151-124965957353282/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:59:10 compute-0 sudo[66292]: pam_unix(sudo:session): session closed for user root
Dec 09 11:59:10 compute-0 sudo[66444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvxxybzcxnhctdllsgghghzykuaaexsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281550.6241024-1196-32574537430498/AnsiballZ_stat.py'
Dec 09 11:59:10 compute-0 sudo[66444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:59:11 compute-0 python3.9[66446]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 11:59:11 compute-0 sudo[66444]: pam_unix(sudo:session): session closed for user root
Dec 09 11:59:11 compute-0 sudo[66567]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yagbrelrpimyngpkuyworjuzjiqleiba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281550.6241024-1196-32574537430498/AnsiballZ_copy.py'
Dec 09 11:59:11 compute-0 sudo[66567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:59:11 compute-0 python3.9[66569]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765281550.6241024-1196-32574537430498/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:59:11 compute-0 sudo[66567]: pam_unix(sudo:session): session closed for user root
Dec 09 11:59:12 compute-0 sudo[66719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjutjvjggxaacahxzbmvrwemxaluklfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281552.1648622-1241-94805196816972/AnsiballZ_stat.py'
Dec 09 11:59:12 compute-0 sudo[66719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:59:12 compute-0 python3.9[66721]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 11:59:12 compute-0 sudo[66719]: pam_unix(sudo:session): session closed for user root
Dec 09 11:59:13 compute-0 sudo[66842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhhzmqyharujlzzkpndoedadmbfybnbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281552.1648622-1241-94805196816972/AnsiballZ_copy.py'
Dec 09 11:59:13 compute-0 sudo[66842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:59:13 compute-0 python3.9[66844]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765281552.1648622-1241-94805196816972/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:59:13 compute-0 sudo[66842]: pam_unix(sudo:session): session closed for user root
Dec 09 11:59:13 compute-0 sudo[66994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pipksazhweuxkjkkusyshkbbmbeblyfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281553.5341604-1286-84073775715414/AnsiballZ_stat.py'
Dec 09 11:59:13 compute-0 sudo[66994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:59:14 compute-0 python3.9[66996]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 11:59:14 compute-0 sudo[66994]: pam_unix(sudo:session): session closed for user root
Dec 09 11:59:14 compute-0 sudo[67117]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ladashhfrgetdzufrwzavggabakgciae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281553.5341604-1286-84073775715414/AnsiballZ_copy.py'
Dec 09 11:59:14 compute-0 sudo[67117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:59:14 compute-0 python3.9[67119]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765281553.5341604-1286-84073775715414/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:59:14 compute-0 sudo[67117]: pam_unix(sudo:session): session closed for user root
Dec 09 11:59:15 compute-0 sudo[67269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmvygzltfmzddjolpkwcojlbvbzryehc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281554.8912404-1331-45629618448090/AnsiballZ_file.py'
Dec 09 11:59:15 compute-0 sudo[67269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:59:15 compute-0 python3.9[67271]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:59:15 compute-0 sudo[67269]: pam_unix(sudo:session): session closed for user root
Dec 09 11:59:15 compute-0 sudo[67421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gerdtsmppgynyqeuoszpipckwggkfueh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281555.603063-1355-246209688502434/AnsiballZ_command.py'
Dec 09 11:59:15 compute-0 sudo[67421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:59:16 compute-0 python3.9[67423]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 11:59:16 compute-0 sudo[67421]: pam_unix(sudo:session): session closed for user root
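The command above concatenates the five edpm nftables fragments and dry-runs them through nft's checker; -c parses and validates the combined ruleset without committing anything to the kernel. The same check can be reproduced by hand with the paths exactly as they appear in the log (edpm-jumps.nft is the one fragment not staged in this window):

# validate the combined ruleset without loading it (-c = check only)
cat /etc/nftables/edpm-chains.nft \
    /etc/nftables/edpm-flushes.nft \
    /etc/nftables/edpm-rules.nft \
    /etc/nftables/edpm-update-jumps.nft \
    /etc/nftables/edpm-jumps.nft | nft -c -f -
echo "exit=$?"   # 0 means the ruleset parses cleanly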
Dec 09 11:59:17 compute-0 sudo[67580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbehcmljdsbbqwpjtgtezvybtomemram ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281556.4125168-1379-251167942754239/AnsiballZ_blockinfile.py'
Dec 09 11:59:17 compute-0 sudo[67580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:59:17 compute-0 python3.9[67582]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:59:17 compute-0 sudo[67580]: pam_unix(sudo:session): session closed for user root
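blockinfile wraps its payload in marker comments built from the task parameters (marker=# {mark} ANSIBLE MANAGED BLOCK, BEGIN/END), and because validate=nft -c -f %s is set, the module checks the candidate file before renaming it into place. A one-shot approximation of the resulting block in /etc/sysconfig/nftables.conf (the module itself inserts or replaces this block idempotently rather than blindly appending):

cat >> /etc/sysconfig/nftables.conf <<'EOF'
# BEGIN ANSIBLE MANAGED BLOCK
include "/etc/nftables/iptables.nft"
include "/etc/nftables/edpm-chains.nft"
include "/etc/nftables/edpm-rules.nft"
include "/etc/nftables/edpm-jumps.nft"
# END ANSIBLE MANAGED BLOCK
EOF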
Dec 09 11:59:17 compute-0 sudo[67733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rexlbyztkrlnwzitdoxzeuygtxefaleh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281557.6422856-1406-218423117136154/AnsiballZ_file.py'
Dec 09 11:59:17 compute-0 sudo[67733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:59:18 compute-0 python3.9[67735]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:59:18 compute-0 sudo[67733]: pam_unix(sudo:session): session closed for user root
Dec 09 11:59:18 compute-0 sudo[67885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfvcfifvhydyftvguolmwwxofulorphc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281558.3187168-1406-230602275141399/AnsiballZ_file.py'
Dec 09 11:59:18 compute-0 sudo[67885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:59:18 compute-0 python3.9[67887]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:59:18 compute-0 sudo[67885]: pam_unix(sudo:session): session closed for user root
Dec 09 11:59:19 compute-0 sudo[68037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aemugpnyjvshuqdycarwswceldceumlb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281559.0292287-1451-236500089083965/AnsiballZ_mount.py'
Dec 09 11:59:19 compute-0 sudo[68037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:59:19 compute-0 python3.9[68039]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 09 11:59:19 compute-0 sudo[68037]: pam_unix(sudo:session): session closed for user root
Dec 09 11:59:20 compute-0 sudo[68190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sapelmvsdqptgarolivaiovjuafafool ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281559.8877854-1451-122890902023528/AnsiballZ_mount.py'
Dec 09 11:59:20 compute-0 sudo[68190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:59:20 compute-0 python3.9[68192]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 09 11:59:20 compute-0 sudo[68190]: pam_unix(sudo:session): session closed for user root
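ansible.posix.mount with state=mounted both mounts the filesystem and, with boot=True, persists it in /etc/fstab. A minimal shell equivalent of the two hugepage tasks, using the paths and options from the log (src=none, dump=0, passno=0):

mkdir -p /dev/hugepages1G /dev/hugepages2M
mount -t hugetlbfs -o pagesize=1G none /dev/hugepages1G
mount -t hugetlbfs -o pagesize=2M none /dev/hugepages2M
# persisted by the module roughly as:
#   none /dev/hugepages1G hugetlbfs pagesize=1G 0 0
#   none /dev/hugepages2M hugetlbfs pagesize=2M 0 0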
Dec 09 11:59:20 compute-0 sshd-session[58985]: Connection closed by 192.168.122.30 port 58278
Dec 09 11:59:20 compute-0 sshd-session[58982]: pam_unix(sshd:session): session closed for user zuul
Dec 09 11:59:20 compute-0 systemd[1]: session-14.scope: Deactivated successfully.
Dec 09 11:59:20 compute-0 systemd[1]: session-14.scope: Consumed 36.663s CPU time.
Dec 09 11:59:20 compute-0 systemd-logind[799]: Session 14 logged out. Waiting for processes to exit.
Dec 09 11:59:20 compute-0 systemd-logind[799]: Removed session 14.
Dec 09 11:59:26 compute-0 sshd-session[68219]: Accepted publickey for zuul from 192.168.122.30 port 57228 ssh2: ECDSA SHA256:9TQybH6jbBrVcztEaDmRsG3ssVtaycQ7UiUr3v9GScY
Dec 09 11:59:26 compute-0 systemd-logind[799]: New session 15 of user zuul.
Dec 09 11:59:26 compute-0 systemd[1]: Started Session 15 of User zuul.
Dec 09 11:59:26 compute-0 sshd-session[68219]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 11:59:26 compute-0 sudo[68372]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iazaveididreogpvmmgtmxhtmmyfijcz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281566.4221265-18-215302885455752/AnsiballZ_tempfile.py'
Dec 09 11:59:26 compute-0 sudo[68372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:59:27 compute-0 python3.9[68374]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Dec 09 11:59:27 compute-0 sudo[68372]: pam_unix(sudo:session): session closed for user root
Dec 09 11:59:27 compute-0 sudo[68524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lumqnitmtrrgkvheshbtpxrszteiglon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281567.3520765-54-48298319704530/AnsiballZ_stat.py'
Dec 09 11:59:27 compute-0 sudo[68524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:59:27 compute-0 python3.9[68526]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 11:59:28 compute-0 sudo[68524]: pam_unix(sudo:session): session closed for user root
Dec 09 11:59:28 compute-0 sudo[68676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erlrywhgmuwnrspejkvjruvgjqpcnthn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281568.3239508-84-268350280357594/AnsiballZ_setup.py'
Dec 09 11:59:28 compute-0 sudo[68676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:59:29 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 09 11:59:29 compute-0 python3.9[68678]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 11:59:29 compute-0 sudo[68676]: pam_unix(sudo:session): session closed for user root
Dec 09 11:59:29 compute-0 sudo[68830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfwsmxnwexoxaqkafgadhgwuawwtamxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281569.5280445-109-253525362148213/AnsiballZ_blockinfile.py'
Dec 09 11:59:30 compute-0 sudo[68830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:59:30 compute-0 python3.9[68832]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDAFFEejnRxYeLroQtatG2+9otmxzBbszV6jyFQNwrN1isFrdA6l9z+A7uKJra8t9d4RS6RhpFji8kbmxU4P2wPnJwdoASaXZEKJZm48y80WgyFwDtPSY+qEC7ZwAkTvdWwR/7GeksW+8WHqRYD3piw5eX4m8m6wxdkbhR6yNtblC1g5CzjbRVciDXh/wZFzDELukduyFQGLBkV7n5J/b7tJt3s2khsP4XQsqVu2bfkAygIrkO0ccn6JOk1/cozliJdRb02RvLsslTzyfl4WhYKb48pBTLveFtVnAlL+u1Oeq/iX5YVntRFsTlkgBCwD2KPXXO9jUWs4efwkpVS/+LvvyPHrRuhxWIktPo6yDI9XNrWdCfFRjD+a/zhtCTRVT3FsRWok1k+xdZMmckUPTYUhGaRXQ+9OgrT+Mbg6haSumfCrRIeRAvczt0TCjx51L1Dxryrwo2ft1asQOQ9eyBrxks8y4FUht/B558sHdjnb6BnLWxI04g4nOj5Y9L8jsE=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHzOxqNHp13+CJ+qyJfI6CIhaOrwG5G/VqIMEz1SSksd
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDRLuQz6aJKG09USyJTSTw8UZ3WJU9htDuYEN8RgPBWfX/CTIRnFmDgwZ45+nLrHX49CbqSpWZ4axkhwSr2OoI0=
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC2epW6WQpwBRBUMgzOneBIsK32aZx329Bp5PjWPeNrp8XUtGRG3i5CBW/vOUGs/LjpjeK5lylyHlL2vJFtmKY12W9lLD/WsDGC7Pes99TEB032jy3wxlzS2lYCHnZSkXfYPbt/kFNrHOakMooJGDcb0iYm7CvW+e4PVUDxfx7gmV6dJO9GnTRYFPh2j5CpCylQ94ZSCv3435Y+mR+Szl97KZ4Fn5KOfac0nVjgLphNdxFNacRXsa9syvHI7A1b+iFde19waMMADc8VlsueJEzZtDV3Kb2Vd4Nr/3Q1RHZ+k1N74E7EH9SsWbAuSLmHlCv+YxNOAFFxR2+6Cv88ObgV18U0I82n2Fc0xkdFAVONm7/CSqKDrPQTJ7aBiclQQ7CU/y8K5g+YmiYoI6fazssNoIB1JNhvDYbLqD3NPvpcQYD0WvAvfCU3uL21Sbkkx3GO6kEpuOu26v91nfrdATq1dCEWMfD3b1u/k0ytleKq+wv3uv+gUVysXkCoRCVXCaU=
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEI6IdFGeXSq2OtyH4vqnbIX+KNbIsjflnhhoDqsPrYD
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLwgv1glkl8mV3etxvfUyCD1hqpghskXsWKD3kgF/pXqgrNRH5iTgiC/jMJtH6NK3yR/03T7YBbwvpguMadywP8=
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCxFINyyWzDYiZ/uVnMykhkpiH9QfGZivHhxFV4umw/sLT/KxtRmnqcS+rVup+lpPJeK2GxnL5y8PqulxQ2PFRGLbr8+YQjgwx75XUrSasKDM+hiEBzRlRvOPpmoz7S3IOq7db66ip0tT6mfk5LiwEkZ0xqbcrvUL0IYO3inYYAh9b5wh5cjkGeA3LM8VM28iAwYsS6DYhWWmMulPytDYLiMz5W0TNzgFvdr/+KCzKmZb2qDQx5d89s0mStGn90dTH8njMKFCUJRA2r2f1Ll8YPMBGfSGrkxLVbCeQeihV5n2rIhVqMjjdcakhSZ3mWYnGe2A5I+6XTDzoMwXFha7u/G9sqb2xYeJPbmG0T3v6KNusOOUBT+a7ey/QF3468Jx8lYyhZpAu7DTZNxAxwcbLezmuF6OOZSwQvyHtaTIIiSj7sEizKe+t4fd3ASAD2goV4Fw8dE3WWG9sp7o6AeRR0KDD8t7xP2rsC6E8TEtKdxZu3psPK2FngYjqSq66k0iM=
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPNm1pjg9t9CJfmT0bnXoPYG0z0isTeEkpNzxnPssnSW
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBC2jFM3xWekc+nho2gXsRWdpWsBzZf6lb+3SQxEfMcrdknDu/3cFRgDHqjPffJ80emrbJLfOU0WcE89MRm3Pq28=
                                             create=True mode=0644 path=/tmp/ansible.j036_dp1 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:59:30 compute-0 sudo[68830]: pam_unix(sudo:session): session closed for user root
Dec 09 11:59:30 compute-0 sudo[68982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfhynuequfeshphgsohpaclmyagilgaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281570.4350502-133-90987171842495/AnsiballZ_command.py'
Dec 09 11:59:30 compute-0 sudo[68982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:59:31 compute-0 python3.9[68984]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.j036_dp1' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 11:59:31 compute-0 sudo[68982]: pam_unix(sudo:session): session closed for user root
Dec 09 11:59:31 compute-0 sudo[69136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-feojdzlhmvdscmvmzozvaqrcrfqagfbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281571.330135-157-29222084046567/AnsiballZ_file.py'
Dec 09 11:59:31 compute-0 sudo[69136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:59:32 compute-0 python3.9[69138]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.j036_dp1 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:59:32 compute-0 sudo[69136]: pam_unix(sudo:session): session closed for user root
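Session 15 is a write-through-tempfile pattern: render the gathered host keys into /tmp/ansible.j036_dp1 with blockinfile, overwrite /etc/ssh/ssh_known_hosts in place with cat (writing through the existing file, which keeps its ownership and SELinux label, rather than renaming the tempfile over it), then delete the tempfile. The result can be spot-checked with ssh-keygen, using a hostname from the block contents above:

# look up one host's entries in the system-wide known_hosts
ssh-keygen -F compute-1.ctlplane.example.com -f /etc/ssh/ssh_known_hosts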
Dec 09 11:59:32 compute-0 sshd-session[68222]: Connection closed by 192.168.122.30 port 57228
Dec 09 11:59:32 compute-0 sshd-session[68219]: pam_unix(sshd:session): session closed for user zuul
Dec 09 11:59:32 compute-0 systemd[1]: session-15.scope: Deactivated successfully.
Dec 09 11:59:32 compute-0 systemd[1]: session-15.scope: Consumed 3.700s CPU time.
Dec 09 11:59:32 compute-0 systemd-logind[799]: Session 15 logged out. Waiting for processes to exit.
Dec 09 11:59:32 compute-0 systemd-logind[799]: Removed session 15.
Dec 09 11:59:38 compute-0 sshd-session[69163]: Accepted publickey for zuul from 192.168.122.30 port 32986 ssh2: ECDSA SHA256:9TQybH6jbBrVcztEaDmRsG3ssVtaycQ7UiUr3v9GScY
Dec 09 11:59:38 compute-0 systemd-logind[799]: New session 16 of user zuul.
Dec 09 11:59:38 compute-0 systemd[1]: Started Session 16 of User zuul.
Dec 09 11:59:38 compute-0 sshd-session[69163]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 11:59:39 compute-0 python3.9[69316]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 11:59:40 compute-0 sudo[69470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxvgfoguahjzzrsboqihnqaazsikhwuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281580.2167814-56-46495521461961/AnsiballZ_systemd.py'
Dec 09 11:59:40 compute-0 sudo[69470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:59:41 compute-0 python3.9[69472]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 09 11:59:41 compute-0 sudo[69470]: pam_unix(sudo:session): session closed for user root
Dec 09 11:59:42 compute-0 sudo[69624]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smfrgykwrswcndozvhodfzsqtphcsbnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281582.2770545-80-17812432224177/AnsiballZ_systemd.py'
Dec 09 11:59:42 compute-0 sudo[69624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:59:42 compute-0 python3.9[69626]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 09 11:59:42 compute-0 sudo[69624]: pam_unix(sudo:session): session closed for user root
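Enablement and runtime state are handled by two separate ansible.builtin.systemd tasks; the shell equivalent is simply:

systemctl enable sshd   # the enabled=True task
systemctl start sshd    # the state=started task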
Dec 09 11:59:43 compute-0 sudo[69777]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtjogoxuxmpaehzlqdrnrcuxxlrbdfyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281583.1836667-107-155810302060373/AnsiballZ_command.py'
Dec 09 11:59:43 compute-0 sudo[69777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:59:43 compute-0 python3.9[69779]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 11:59:43 compute-0 sudo[69777]: pam_unix(sudo:session): session closed for user root
Dec 09 11:59:44 compute-0 sudo[69930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mohqawfqdvbfueisudcleetskvrngmur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281584.0329971-131-258098825255519/AnsiballZ_stat.py'
Dec 09 11:59:44 compute-0 sudo[69930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:59:44 compute-0 python3.9[69932]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 11:59:44 compute-0 sudo[69930]: pam_unix(sudo:session): session closed for user root
Dec 09 11:59:45 compute-0 sudo[70084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oprfmjahkzaucjshnsbvcxhyvnjmihhs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281584.901665-155-137648050824777/AnsiballZ_command.py'
Dec 09 11:59:45 compute-0 sudo[70084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:59:45 compute-0 python3.9[70086]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 11:59:45 compute-0 sudo[70084]: pam_unix(sudo:session): session closed for user root
Dec 09 11:59:46 compute-0 sudo[70239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzivfunbadgaxeqdukdmfmfpghxmpvja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281585.6657133-179-264155490735002/AnsiballZ_file.py'
Dec 09 11:59:46 compute-0 sudo[70239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:59:46 compute-0 python3.9[70241]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 11:59:46 compute-0 sudo[70239]: pam_unix(sudo:session): session closed for user root
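This block is the apply side of the files staged at 11:59:09-11:59:15. edpm-chains.nft is loaded first (presumably only table/chain declarations, which nft treats as create-if-missing, so re-applying is safe); the edpm-rules.nft.changed marker touched earlier then gates a flush-and-reload of the rules, and the marker is removed once the reload succeeds, so an unchanged ruleset is not reloaded on the next run. As a shell sketch of the pattern visible in the log:

nft -f /etc/nftables/edpm-chains.nft   # (re)create tables and chains
if [ -e /etc/nftables/edpm-rules.nft.changed ]; then
    # nft -f applies the whole stream as one transaction:
    # flush old rules and load the new set atomically
    cat /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft | nft -f -
    rm -f /etc/nftables/edpm-rules.nft.changed
fi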
Dec 09 11:59:46 compute-0 sshd-session[69166]: Connection closed by 192.168.122.30 port 32986
Dec 09 11:59:46 compute-0 sshd-session[69163]: pam_unix(sshd:session): session closed for user zuul
Dec 09 11:59:46 compute-0 systemd[1]: session-16.scope: Deactivated successfully.
Dec 09 11:59:46 compute-0 systemd[1]: session-16.scope: Consumed 4.929s CPU time.
Dec 09 11:59:46 compute-0 systemd-logind[799]: Session 16 logged out. Waiting for processes to exit.
Dec 09 11:59:46 compute-0 systemd-logind[799]: Removed session 16.
Dec 09 11:59:52 compute-0 sshd-session[70266]: Accepted publickey for zuul from 192.168.122.30 port 35772 ssh2: ECDSA SHA256:9TQybH6jbBrVcztEaDmRsG3ssVtaycQ7UiUr3v9GScY
Dec 09 11:59:52 compute-0 systemd-logind[799]: New session 17 of user zuul.
Dec 09 11:59:52 compute-0 systemd[1]: Started Session 17 of User zuul.
Dec 09 11:59:52 compute-0 sshd-session[70266]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 11:59:53 compute-0 python3.9[70419]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 11:59:54 compute-0 sudo[70573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrbqvsbiqrgjetwpmyzunessmofjjmbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281594.3077111-62-20959556040033/AnsiballZ_setup.py'
Dec 09 11:59:54 compute-0 sudo[70573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:59:54 compute-0 python3.9[70575]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 09 11:59:55 compute-0 sudo[70573]: pam_unix(sudo:session): session closed for user root
Dec 09 11:59:55 compute-0 sudo[70657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwonutzlkaqeutnfbnewsfkaxghawsbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765281594.3077111-62-20959556040033/AnsiballZ_dnf.py'
Dec 09 11:59:55 compute-0 sudo[70657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:59:55 compute-0 python3.9[70659]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 09 11:59:57 compute-0 sudo[70657]: pam_unix(sudo:session): session closed for user root
Dec 09 11:59:58 compute-0 python3.9[70810]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
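yum-utils is installed first because it provides needs-restarting; with -r the tool only reports whether a full reboot is required, signalling it through the exit code, which is what the playbook consumes:

needs-restarting -r
# exit 0: no reboot needed
# exit 1: reboot required (e.g. kernel or core libraries updated since boot)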
Dec 09 12:00:00 compute-0 python3.9[70961]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 09 12:00:01 compute-0 python3.9[71111]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 12:00:01 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 09 12:00:01 compute-0 python3.9[71262]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 12:00:02 compute-0 sshd-session[70269]: Connection closed by 192.168.122.30 port 35772
Dec 09 12:00:02 compute-0 sshd-session[70266]: pam_unix(sshd:session): session closed for user zuul
Dec 09 12:00:02 compute-0 systemd[1]: session-17.scope: Deactivated successfully.
Dec 09 12:00:02 compute-0 systemd[1]: session-17.scope: Consumed 6.694s CPU time.
Dec 09 12:00:02 compute-0 systemd-logind[799]: Session 17 logged out. Waiting for processes to exit.
Dec 09 12:00:02 compute-0 systemd-logind[799]: Removed session 17.
Dec 09 12:00:11 compute-0 sshd-session[71287]: Accepted publickey for zuul from 38.102.83.236 port 38068 ssh2: RSA SHA256:6Ie4ZXK9Ek36UC2sJEF3TJKSrACzyJGKSwiteASgUXs
Dec 09 12:00:11 compute-0 systemd-logind[799]: New session 18 of user zuul.
Dec 09 12:00:11 compute-0 systemd[1]: Started Session 18 of User zuul.
Dec 09 12:00:11 compute-0 sshd-session[71287]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 12:00:11 compute-0 sudo[71363]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjpihvnumbqiyjbogsvbdmsgatpnqowi ; /usr/bin/python3'
Dec 09 12:00:11 compute-0 sudo[71363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:00:11 compute-0 useradd[71367]: new group: name=ceph-admin, GID=42478
Dec 09 12:00:11 compute-0 useradd[71367]: new user: name=ceph-admin, UID=42477, GID=42478, home=/home/ceph-admin, shell=/bin/bash, from=none
Dec 09 12:00:12 compute-0 sudo[71363]: pam_unix(sudo:session): session closed for user root
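The first become task of session 18 creates the ceph-admin account; the useradd records show the values chosen. An equivalent direct invocation (UID/GID copied from the log lines above; normally they would be auto-allocated):

groupadd -g 42478 ceph-admin
useradd -u 42477 -g ceph-admin -m -d /home/ceph-admin -s /bin/bash ceph-admin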
Dec 09 12:00:12 compute-0 sudo[71449]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsszridrxeqblmvbppsixsvpmmfkhqnw ; /usr/bin/python3'
Dec 09 12:00:12 compute-0 sudo[71449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:00:12 compute-0 sudo[71449]: pam_unix(sudo:session): session closed for user root
Dec 09 12:00:12 compute-0 sudo[71522]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-juynunujpttbhzgqdcuupyrzjjadojbp ; /usr/bin/python3'
Dec 09 12:00:12 compute-0 sudo[71522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:00:13 compute-0 sudo[71522]: pam_unix(sudo:session): session closed for user root
Dec 09 12:00:13 compute-0 sudo[71572]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zioxoezeesoupvmdswjurqmlwvutnzyi ; /usr/bin/python3'
Dec 09 12:00:13 compute-0 sudo[71572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:00:13 compute-0 sudo[71572]: pam_unix(sudo:session): session closed for user root
Dec 09 12:00:13 compute-0 sudo[71598]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etstdezzofjbitcwhvafygaoowbcjvhi ; /usr/bin/python3'
Dec 09 12:00:13 compute-0 sudo[71598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:00:14 compute-0 sudo[71598]: pam_unix(sudo:session): session closed for user root
Dec 09 12:00:14 compute-0 sudo[71624]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evgceajygrvdohmkvrlonczlyhkyfbln ; /usr/bin/python3'
Dec 09 12:00:14 compute-0 sudo[71624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:00:14 compute-0 sudo[71624]: pam_unix(sudo:session): session closed for user root
Dec 09 12:00:14 compute-0 sudo[71650]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxiugaxkweusenjpigrjzvwzlhgwbkok ; /usr/bin/python3'
Dec 09 12:00:14 compute-0 sudo[71650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:00:15 compute-0 sudo[71650]: pam_unix(sudo:session): session closed for user root
Dec 09 12:00:15 compute-0 sudo[71728]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjkbpyhbiimrvaciqmnygsqlbzcimfhl ; /usr/bin/python3'
Dec 09 12:00:15 compute-0 sudo[71728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:00:15 compute-0 sudo[71728]: pam_unix(sudo:session): session closed for user root
Dec 09 12:00:15 compute-0 sudo[71801]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkdiheflzossljyaegqccvssaagyzydg ; /usr/bin/python3'
Dec 09 12:00:15 compute-0 sudo[71801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:00:15 compute-0 sudo[71801]: pam_unix(sudo:session): session closed for user root
Dec 09 12:00:16 compute-0 sudo[71903]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-neqvfiwsgwkfqgnsydrjlonuehsyvjpk ; /usr/bin/python3'
Dec 09 12:00:16 compute-0 sudo[71903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:00:16 compute-0 sudo[71903]: pam_unix(sudo:session): session closed for user root
Dec 09 12:00:16 compute-0 sudo[71976]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbxpcgwkgkehsbvabipnxlmftjplmqpa ; /usr/bin/python3'
Dec 09 12:00:16 compute-0 sudo[71976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:00:16 compute-0 sudo[71976]: pam_unix(sudo:session): session closed for user root
Dec 09 12:00:17 compute-0 sudo[72026]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxzvbhohouzmlgvryjojygbdkysepkfy ; /usr/bin/python3'
Dec 09 12:00:17 compute-0 sudo[72026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:00:17 compute-0 python3[72028]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 12:00:18 compute-0 sudo[72026]: pam_unix(sudo:session): session closed for user root
Dec 09 12:00:19 compute-0 sudo[72121]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-siiyrtqmfhftsetocdcppcqpicrftake ; /usr/bin/python3'
Dec 09 12:00:19 compute-0 sudo[72121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:00:19 compute-0 chronyd[58501]: Selected source 23.133.168.244 (pool.ntp.org)
Dec 09 12:00:19 compute-0 python3[72123]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 09 12:00:20 compute-0 sudo[72121]: pam_unix(sudo:session): session closed for user root
Dec 09 12:00:21 compute-0 sudo[72148]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqxouewgoerywewtdpvyvqltazimpzib ; /usr/bin/python3'
Dec 09 12:00:21 compute-0 sudo[72148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:00:21 compute-0 python3[72150]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 09 12:00:21 compute-0 sudo[72148]: pam_unix(sudo:session): session closed for user root
Dec 09 12:00:21 compute-0 sudo[72174]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zesqedvzogwzkdxgsndoxrybcriibevl ; /usr/bin/python3'
Dec 09 12:00:21 compute-0 sudo[72174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:00:21 compute-0 python3[72176]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
                                          losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 12:00:21 compute-0 kernel: loop: module loaded
Dec 09 12:00:21 compute-0 kernel: loop3: detected capacity change from 0 to 41943040
Dec 09 12:00:21 compute-0 sudo[72174]: pam_unix(sudo:session): session closed for user root
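dd with count=0 seek=20G writes no data at all; it only sets the file length, producing a sparse 20 GiB backing file that losetup then exposes as /dev/loop3 (the kernel's "detected capacity change ... to 41943040" is exactly 20 GiB in 512-byte sectors). The mapping and the sparseness can be verified with:

losetup -l /dev/loop3                          # show the backing-file mapping
du -h --apparent-size /var/lib/ceph-osd-0.img  # 20G apparent size
du -h /var/lib/ceph-osd-0.img                  # near-zero blocks actually allocated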
Dec 09 12:00:22 compute-0 sudo[72209]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjhrwoqscfpepobohoutpjzxesdsckui ; /usr/bin/python3'
Dec 09 12:00:22 compute-0 sudo[72209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:00:22 compute-0 python3[72211]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                          vgcreate ceph_vg0 /dev/loop3
                                          lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 12:00:22 compute-0 lvm[72214]: PV /dev/loop3 not used.
Dec 09 12:00:22 compute-0 lvm[72223]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 09 12:00:22 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Dec 09 12:00:22 compute-0 sudo[72209]: pam_unix(sudo:session): session closed for user root
Dec 09 12:00:22 compute-0 lvm[72225]:   1 logical volume(s) in volume group "ceph_vg0" now active
Dec 09 12:00:22 compute-0 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
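The loop device is then turned into a single full-size logical volume (pvcreate, vgcreate, lvcreate -l +100%FREE); the interleaved lvm/systemd lines are event-driven autoactivation reacting to the new volume group. The resulting stack can be checked with:

pvs /dev/loop3   # physical volume on the loop device
vgs ceph_vg0     # the volume group consuming it
lvs ceph_vg0     # ceph_lv0 taking 100% of the free extents (~20g)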
Dec 09 12:00:22 compute-0 sudo[72301]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ophifgyywloleforaqdezjsyjmrtoutx ; /usr/bin/python3'
Dec 09 12:00:22 compute-0 sudo[72301]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:00:23 compute-0 python3[72303]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 09 12:00:23 compute-0 sudo[72301]: pam_unix(sudo:session): session closed for user root
Dec 09 12:00:23 compute-0 sudo[72374]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iyobtppkqzsjrmusohkedfqnxpwhtcsu ; /usr/bin/python3'
Dec 09 12:00:23 compute-0 sudo[72374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:00:23 compute-0 python3[72376]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765281622.776865-36830-146999397730376/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 12:00:23 compute-0 sudo[72374]: pam_unix(sudo:session): session closed for user root
Dec 09 12:00:24 compute-0 sudo[72424]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwxsayyrqwqruaefwxdhvidotvnisvmd ; /usr/bin/python3'
Dec 09 12:00:24 compute-0 sudo[72424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:00:24 compute-0 python3[72426]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 12:00:24 compute-0 systemd[1]: Reloading.
Dec 09 12:00:24 compute-0 systemd-sysv-generator[72459]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 12:00:24 compute-0 systemd-rc-local-generator[72451]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 12:00:24 compute-0 systemd[1]: Starting Ceph OSD losetup...
Dec 09 12:00:24 compute-0 bash[72467]: /dev/loop3: [64513]:4327748 (/var/lib/ceph-osd-0.img)
Dec 09 12:00:24 compute-0 systemd[1]: Finished Ceph OSD losetup.
Dec 09 12:00:24 compute-0 lvm[72468]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 09 12:00:24 compute-0 lvm[72468]: VG ceph_vg0 finished
Dec 09 12:00:24 compute-0 sudo[72424]: pam_unix(sudo:session): session closed for user root
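The content of ceph-osd-losetup-0.service never appears in the journal, only its installation and start. A plausible reconstruction, consistent with the "Ceph OSD losetup" description and the bash/losetup output logged at 12:00:24 (purely illustrative, not the deployed file):

cat > /etc/systemd/system/ceph-osd-losetup-0.service <<'EOF'
[Unit]
Description=Ceph OSD losetup
After=local-fs.target

[Service]
Type=oneshot
RemainAfterExit=yes
# re-attach the backing file if needed, otherwise just print the mapping
ExecStart=/bin/bash -c '/sbin/losetup /dev/loop3 /var/lib/ceph-osd-0.img || /sbin/losetup /dev/loop3'

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable --now ceph-osd-losetup-0.service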
Dec 09 12:00:27 compute-0 python3[72492]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 12:00:30 compute-0 sudo[72583]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pialkpjmknmjddolxmikroammagcocoy ; /usr/bin/python3'
Dec 09 12:00:30 compute-0 sudo[72583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:00:30 compute-0 python3[72585]: ansible-ansible.legacy.dnf Invoked with name=['centos-release-ceph-squid'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 09 12:00:33 compute-0 sudo[72583]: pam_unix(sudo:session): session closed for user root
Dec 09 12:00:33 compute-0 sudo[72640]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkduwqotjsrbavgdmdtifxsyjtmseaxk ; /usr/bin/python3'
Dec 09 12:00:33 compute-0 sudo[72640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:00:33 compute-0 python3[72642]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 09 12:00:36 compute-0 groupadd[72652]: group added to /etc/group: name=cephadm, GID=992
Dec 09 12:00:36 compute-0 groupadd[72652]: group added to /etc/gshadow: name=cephadm
Dec 09 12:00:36 compute-0 groupadd[72652]: new group: name=cephadm, GID=992
Dec 09 12:00:36 compute-0 useradd[72659]: new user: name=cephadm, UID=992, GID=992, home=/var/lib/cephadm, shell=/bin/bash, from=none
Dec 09 12:00:36 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 09 12:00:36 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 09 12:00:37 compute-0 sudo[72640]: pam_unix(sudo:session): session closed for user root
Dec 09 12:00:37 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 09 12:00:37 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 09 12:00:37 compute-0 systemd[1]: run-r1c8bfedec95040cab04d199bea7c0d50.service: Deactivated successfully.
Dec 09 12:00:37 compute-0 sudo[72755]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rafvpsohogrdfbyrymdwtnskkcwbazlh ; /usr/bin/python3'
Dec 09 12:00:37 compute-0 sudo[72755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:00:37 compute-0 python3[72757]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 09 12:00:37 compute-0 sudo[72755]: pam_unix(sudo:session): session closed for user root
Dec 09 12:00:38 compute-0 sudo[72783]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlzfclmjxppldqqzxpzopzsolfezvozy ; /usr/bin/python3'
Dec 09 12:00:38 compute-0 sudo[72783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:00:38 compute-0 python3[72785]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 12:00:38 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 09 12:00:38 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 09 12:00:38 compute-0 sudo[72783]: pam_unix(sudo:session): session closed for user root
Dec 09 12:00:39 compute-0 sudo[72846]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyvffzpszylscpqaehpmaachtuuzxspe ; /usr/bin/python3'
Dec 09 12:00:39 compute-0 sudo[72846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:00:39 compute-0 python3[72848]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 12:00:39 compute-0 sudo[72846]: pam_unix(sudo:session): session closed for user root
Dec 09 12:00:39 compute-0 sudo[72872]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-caxptsrpltupmuwnvjwlgewwhejfnkzo ; /usr/bin/python3'
Dec 09 12:00:39 compute-0 sudo[72872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:00:39 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 09 12:00:39 compute-0 python3[72874]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 12:00:39 compute-0 sudo[72872]: pam_unix(sudo:session): session closed for user root
Dec 09 12:00:40 compute-0 sudo[72950]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blupxvnnxugottxxdcpesrfgivxpckyt ; /usr/bin/python3'
Dec 09 12:00:40 compute-0 sudo[72950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:00:40 compute-0 python3[72952]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 09 12:00:40 compute-0 sudo[72950]: pam_unix(sudo:session): session closed for user root
Dec 09 12:00:40 compute-0 sudo[73023]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yerkwujilmnssimgdeqafewvggkjarie ; /usr/bin/python3'
Dec 09 12:00:40 compute-0 sudo[73023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:00:40 compute-0 python3[73025]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765281640.0293133-37022-164864712881902/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=a2c84611a4e46cfce32a90c112eae0345cab6abb backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 12:00:40 compute-0 sudo[73023]: pam_unix(sudo:session): session closed for user root
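The spec file's content is not logged (content=NOT_LOGGING_PARAMETER). For orientation only, a cephadm service spec covering a host plus an LVM-backed OSD generally has the following shape; every value here is illustrative, reusing names seen elsewhere in this log rather than the actual deployed spec:

cat > /home/ceph-admin/specs/ceph_spec.yaml <<'EOF'
service_type: host
hostname: compute-0.ctlplane.example.com
addr: 192.168.122.100
---
service_type: osd
service_id: default_drive_group
placement:
  hosts:
    - compute-0.ctlplane.example.com
spec:
  data_devices:
    paths:
      - /dev/ceph_vg0/ceph_lv0
EOF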
Dec 09 12:00:41 compute-0 sudo[73125]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axoxedaoajncrfrpowxcryiigrnckzai ; /usr/bin/python3'
Dec 09 12:00:41 compute-0 sudo[73125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:00:41 compute-0 python3[73127]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 09 12:00:41 compute-0 sudo[73125]: pam_unix(sudo:session): session closed for user root
Dec 09 12:00:41 compute-0 sudo[73198]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmsokvymgyhwoycymwgbjfvshevntkex ; /usr/bin/python3'
Dec 09 12:00:41 compute-0 sudo[73198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:00:41 compute-0 python3[73200]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765281641.2274039-37040-71987605744006/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 12:00:41 compute-0 sudo[73198]: pam_unix(sudo:session): session closed for user root
Dec 09 12:00:42 compute-0 sudo[73248]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iogpgdessaufyxbekcbulzlvunqugtdg ; /usr/bin/python3'
Dec 09 12:00:42 compute-0 sudo[73248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:00:42 compute-0 python3[73250]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 09 12:00:42 compute-0 sudo[73248]: pam_unix(sudo:session): session closed for user root
Dec 09 12:00:42 compute-0 sudo[73276]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjfrxuktitjfgmqyhfqurfogdzpqmwda ; /usr/bin/python3'
Dec 09 12:00:42 compute-0 sudo[73276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:00:42 compute-0 python3[73278]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 09 12:00:42 compute-0 sudo[73276]: pam_unix(sudo:session): session closed for user root
Dec 09 12:00:42 compute-0 sudo[73304]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsqnznnfuoaecbryrycmrzeywpsjyxsc ; /usr/bin/python3'
Dec 09 12:00:42 compute-0 sudo[73304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:00:42 compute-0 python3[73306]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 09 12:00:43 compute-0 sudo[73304]: pam_unix(sudo:session): session closed for user root
Dec 09 12:00:43 compute-0 sudo[73332]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqgaqvuiggmvmrzxggrmueqdrtbqklws ; /usr/bin/python3'
Dec 09 12:00:43 compute-0 sudo[73332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:00:43 compute-0 python3[73334]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 750b57e3-924f-51a5-ab09-01517535f732 --config /home/ceph-admin/assimilate_ceph.conf \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100
                                           _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
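cephadm bootstrap brings up the first mon/mgr on 192.168.122.100 using the pre-created ceph-admin SSH keypair; the ceph-admin SSH session and the sudo /bin/echo probe immediately below are cephadm verifying passwordless access back into the host. Once bootstrap completes, cluster state is typically inspected with standard cephadm commands (not shown in this log):

cephadm shell -- ceph -s            # overall cluster health
cephadm shell -- ceph orch host ls  # hosts known to the orchestrator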
Dec 09 12:00:43 compute-0 sshd-session[73338]: Accepted publickey for ceph-admin from 192.168.122.100 port 49268 ssh2: RSA SHA256:9gI9N7BVF766ydxek6duxvVO5SKV8ll995eSm4AS2/E
Dec 09 12:00:43 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Dec 09 12:00:43 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Dec 09 12:00:43 compute-0 systemd-logind[799]: New session 19 of user ceph-admin.
Dec 09 12:00:43 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Dec 09 12:00:43 compute-0 systemd[1]: Starting User Manager for UID 42477...
Dec 09 12:00:43 compute-0 systemd[73342]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 09 12:00:43 compute-0 systemd[73342]: Queued start job for default target Main User Target.
Dec 09 12:00:43 compute-0 systemd[73342]: Created slice User Application Slice.
Dec 09 12:00:43 compute-0 systemd[73342]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 09 12:00:43 compute-0 systemd[73342]: Started Daily Cleanup of User's Temporary Directories.
Dec 09 12:00:43 compute-0 systemd[73342]: Reached target Paths.
Dec 09 12:00:43 compute-0 systemd[73342]: Reached target Timers.
Dec 09 12:00:43 compute-0 systemd[73342]: Starting D-Bus User Message Bus Socket...
Dec 09 12:00:43 compute-0 systemd[73342]: Starting Create User's Volatile Files and Directories...
Dec 09 12:00:43 compute-0 systemd[73342]: Listening on D-Bus User Message Bus Socket.
Dec 09 12:00:43 compute-0 systemd[73342]: Reached target Sockets.
Dec 09 12:00:43 compute-0 systemd[73342]: Finished Create User's Volatile Files and Directories.
Dec 09 12:00:43 compute-0 systemd[73342]: Reached target Basic System.
Dec 09 12:00:43 compute-0 systemd[73342]: Reached target Main User Target.
Dec 09 12:00:43 compute-0 systemd[73342]: Startup finished in 127ms.
Dec 09 12:00:43 compute-0 systemd[1]: Started User Manager for UID 42477.
Dec 09 12:00:43 compute-0 systemd[1]: Started Session 19 of User ceph-admin.
Dec 09 12:00:43 compute-0 sshd-session[73338]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 09 12:00:43 compute-0 sudo[73358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/echo
Dec 09 12:00:43 compute-0 sudo[73358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:00:43 compute-0 sudo[73358]: pam_unix(sudo:session): session closed for user root
Dec 09 12:00:43 compute-0 sshd-session[73357]: Received disconnect from 192.168.122.100 port 49268:11: disconnected by user
Dec 09 12:00:43 compute-0 sshd-session[73357]: Disconnected from user ceph-admin 192.168.122.100 port 49268
Dec 09 12:00:43 compute-0 sshd-session[73338]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 09 12:00:43 compute-0 systemd-logind[799]: Session 19 logged out. Waiting for processes to exit.
Dec 09 12:00:43 compute-0 systemd[1]: session-19.scope: Deactivated successfully.
Dec 09 12:00:43 compute-0 systemd-logind[799]: Removed session 19.
Dec 09 12:00:44 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 09 12:00:44 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 09 12:00:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat308103233-merged.mount: Deactivated successfully.
Dec 09 12:00:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat308103233-lower\x2dmapped.mount: Deactivated successfully.
Dec 09 12:00:54 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Dec 09 12:00:54 compute-0 systemd[73342]: Activating special unit Exit the Session...
Dec 09 12:00:54 compute-0 systemd[73342]: Stopped target Main User Target.
Dec 09 12:00:54 compute-0 systemd[73342]: Stopped target Basic System.
Dec 09 12:00:54 compute-0 systemd[73342]: Stopped target Paths.
Dec 09 12:00:54 compute-0 systemd[73342]: Stopped target Sockets.
Dec 09 12:00:54 compute-0 systemd[73342]: Stopped target Timers.
Dec 09 12:00:54 compute-0 systemd[73342]: Stopped Mark boot as successful after the user session has run 2 minutes.
Dec 09 12:00:54 compute-0 systemd[73342]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 09 12:00:54 compute-0 systemd[73342]: Closed D-Bus User Message Bus Socket.
Dec 09 12:00:54 compute-0 systemd[73342]: Stopped Create User's Volatile Files and Directories.
Dec 09 12:00:54 compute-0 systemd[73342]: Removed slice User Application Slice.
Dec 09 12:00:54 compute-0 systemd[73342]: Reached target Shutdown.
Dec 09 12:00:54 compute-0 systemd[73342]: Finished Exit the Session.
Dec 09 12:00:54 compute-0 systemd[73342]: Reached target Exit the Session.
Dec 09 12:00:54 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Dec 09 12:00:54 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Dec 09 12:00:54 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Dec 09 12:00:54 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Dec 09 12:00:54 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Dec 09 12:00:54 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Dec 09 12:00:54 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Dec 09 12:01:01 compute-0 CROND[73496]: (root) CMD (run-parts /etc/cron.hourly)
Dec 09 12:01:01 compute-0 run-parts[73499]: (/etc/cron.hourly) starting 0anacron
Dec 09 12:01:01 compute-0 anacron[73507]: Anacron started on 2025-12-09
Dec 09 12:01:01 compute-0 anacron[73507]: Will run job `cron.daily' in 23 min.
Dec 09 12:01:01 compute-0 anacron[73507]: Will run job `cron.weekly' in 43 min.
Dec 09 12:01:01 compute-0 anacron[73507]: Will run job `cron.monthly' in 63 min.
Dec 09 12:01:01 compute-0 anacron[73507]: Jobs will be executed sequentially
Dec 09 12:01:01 compute-0 run-parts[73509]: (/etc/cron.hourly) finished 0anacron
Dec 09 12:01:01 compute-0 CROND[73495]: (root) CMDEND (run-parts /etc/cron.hourly)
Dec 09 12:01:20 compute-0 podman[73434]: 2025-12-09 12:01:20.730112382 +0000 UTC m=+36.485928885 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:01:20 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 09 12:01:20 compute-0 podman[73511]: 2025-12-09 12:01:20.807285416 +0000 UTC m=+0.042534342 container create 167b72865daba7ff059ede4a79b1daed86b4260f758c49d7618463c658a2e8ec (image=quay.io/ceph/ceph:v19, name=competent_jackson, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 12:01:20 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Dec 09 12:01:20 compute-0 systemd[1]: Started libpod-conmon-167b72865daba7ff059ede4a79b1daed86b4260f758c49d7618463c658a2e8ec.scope.
Dec 09 12:01:20 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:01:20 compute-0 podman[73511]: 2025-12-09 12:01:20.789222471 +0000 UTC m=+0.024471417 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:01:20 compute-0 podman[73511]: 2025-12-09 12:01:20.915798592 +0000 UTC m=+0.151047538 container init 167b72865daba7ff059ede4a79b1daed86b4260f758c49d7618463c658a2e8ec (image=quay.io/ceph/ceph:v19, name=competent_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 12:01:20 compute-0 podman[73511]: 2025-12-09 12:01:20.924924547 +0000 UTC m=+0.160173473 container start 167b72865daba7ff059ede4a79b1daed86b4260f758c49d7618463c658a2e8ec (image=quay.io/ceph/ceph:v19, name=competent_jackson, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 09 12:01:20 compute-0 podman[73511]: 2025-12-09 12:01:20.929079568 +0000 UTC m=+0.164328514 container attach 167b72865daba7ff059ede4a79b1daed86b4260f758c49d7618463c658a2e8ec (image=quay.io/ceph/ceph:v19, name=competent_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec 09 12:01:21 compute-0 competent_jackson[73528]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)
Dec 09 12:01:21 compute-0 systemd[1]: libpod-167b72865daba7ff059ede4a79b1daed86b4260f758c49d7618463c658a2e8ec.scope: Deactivated successfully.
Dec 09 12:01:21 compute-0 conmon[73528]: conmon 167b72865daba7ff059e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-167b72865daba7ff059ede4a79b1daed86b4260f758c49d7618463c658a2e8ec.scope/container/memory.events
Dec 09 12:01:21 compute-0 podman[73511]: 2025-12-09 12:01:21.038869983 +0000 UTC m=+0.274118909 container died 167b72865daba7ff059ede4a79b1daed86b4260f758c49d7618463c658a2e8ec (image=quay.io/ceph/ceph:v19, name=competent_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 09 12:01:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-028fc2dfd2c75fbf8a8b0aac2763190bcb7d3d3c613cf3adc9d641898b65cf16-merged.mount: Deactivated successfully.
Dec 09 12:01:21 compute-0 podman[73511]: 2025-12-09 12:01:21.071442443 +0000 UTC m=+0.306691369 container remove 167b72865daba7ff059ede4a79b1daed86b4260f758c49d7618463c658a2e8ec (image=quay.io/ceph/ceph:v19, name=competent_jackson, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec 09 12:01:21 compute-0 systemd[1]: libpod-conmon-167b72865daba7ff059ede4a79b1daed86b4260f758c49d7618463c658a2e8ec.scope: Deactivated successfully.
Dec 09 12:01:21 compute-0 podman[73544]: 2025-12-09 12:01:21.127255909 +0000 UTC m=+0.036472752 container create 15976c81e6da38232ae91f985386a14b9c092320ffe3b8f87f49dec45ffa458c (image=quay.io/ceph/ceph:v19, name=brave_haibt, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec 09 12:01:21 compute-0 systemd[1]: Started libpod-conmon-15976c81e6da38232ae91f985386a14b9c092320ffe3b8f87f49dec45ffa458c.scope.
Dec 09 12:01:21 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:01:21 compute-0 podman[73544]: 2025-12-09 12:01:21.111193247 +0000 UTC m=+0.020410110 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:01:21 compute-0 podman[73544]: 2025-12-09 12:01:21.209296116 +0000 UTC m=+0.118512959 container init 15976c81e6da38232ae91f985386a14b9c092320ffe3b8f87f49dec45ffa458c (image=quay.io/ceph/ceph:v19, name=brave_haibt, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 12:01:21 compute-0 podman[73544]: 2025-12-09 12:01:21.216261344 +0000 UTC m=+0.125478187 container start 15976c81e6da38232ae91f985386a14b9c092320ffe3b8f87f49dec45ffa458c (image=quay.io/ceph/ceph:v19, name=brave_haibt, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 09 12:01:21 compute-0 podman[73544]: 2025-12-09 12:01:21.219369862 +0000 UTC m=+0.128586705 container attach 15976c81e6da38232ae91f985386a14b9c092320ffe3b8f87f49dec45ffa458c (image=quay.io/ceph/ceph:v19, name=brave_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 12:01:21 compute-0 brave_haibt[73560]: 167 167
Dec 09 12:01:21 compute-0 systemd[1]: libpod-15976c81e6da38232ae91f985386a14b9c092320ffe3b8f87f49dec45ffa458c.scope: Deactivated successfully.
Dec 09 12:01:21 compute-0 podman[73544]: 2025-12-09 12:01:21.222845781 +0000 UTC m=+0.132062624 container died 15976c81e6da38232ae91f985386a14b9c092320ffe3b8f87f49dec45ffa458c (image=quay.io/ceph/ceph:v19, name=brave_haibt, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 12:01:21 compute-0 podman[73544]: 2025-12-09 12:01:21.255813272 +0000 UTC m=+0.165030115 container remove 15976c81e6da38232ae91f985386a14b9c092320ffe3b8f87f49dec45ffa458c (image=quay.io/ceph/ceph:v19, name=brave_haibt, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 12:01:21 compute-0 systemd[1]: libpod-conmon-15976c81e6da38232ae91f985386a14b9c092320ffe3b8f87f49dec45ffa458c.scope: Deactivated successfully.
Dec 09 12:01:21 compute-0 podman[73576]: 2025-12-09 12:01:21.312437984 +0000 UTC m=+0.034761299 container create d3436b3b00908b3400c0bd166c31c9272017c370d5412ec18ff6652852a53828 (image=quay.io/ceph/ceph:v19, name=adoring_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 12:01:21 compute-0 systemd[1]: Started libpod-conmon-d3436b3b00908b3400c0bd166c31c9272017c370d5412ec18ff6652852a53828.scope.
Dec 09 12:01:21 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:01:21 compute-0 podman[73576]: 2025-12-09 12:01:21.2963344 +0000 UTC m=+0.018657745 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:01:21 compute-0 podman[73576]: 2025-12-09 12:01:21.452981821 +0000 UTC m=+0.175305176 container init d3436b3b00908b3400c0bd166c31c9272017c370d5412ec18ff6652852a53828 (image=quay.io/ceph/ceph:v19, name=adoring_rosalind, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec 09 12:01:21 compute-0 podman[73576]: 2025-12-09 12:01:21.460500507 +0000 UTC m=+0.182823832 container start d3436b3b00908b3400c0bd166c31c9272017c370d5412ec18ff6652852a53828 (image=quay.io/ceph/ceph:v19, name=adoring_rosalind, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 12:01:21 compute-0 podman[73576]: 2025-12-09 12:01:21.464021597 +0000 UTC m=+0.186344952 container attach d3436b3b00908b3400c0bd166c31c9272017c370d5412ec18ff6652852a53828 (image=quay.io/ceph/ceph:v19, name=adoring_rosalind, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True)
Dec 09 12:01:21 compute-0 adoring_rosalind[73592]: AQCRDzhpmGeYHBAAcqSZ9/oYwk0/9FY5M8mWQQ==
Dec 09 12:01:21 compute-0 systemd[1]: libpod-d3436b3b00908b3400c0bd166c31c9272017c370d5412ec18ff6652852a53828.scope: Deactivated successfully.
Dec 09 12:01:21 compute-0 podman[73576]: 2025-12-09 12:01:21.483214338 +0000 UTC m=+0.205537663 container died d3436b3b00908b3400c0bd166c31c9272017c370d5412ec18ff6652852a53828 (image=quay.io/ceph/ceph:v19, name=adoring_rosalind, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 12:01:21 compute-0 podman[73576]: 2025-12-09 12:01:21.512479924 +0000 UTC m=+0.234803249 container remove d3436b3b00908b3400c0bd166c31c9272017c370d5412ec18ff6652852a53828 (image=quay.io/ceph/ceph:v19, name=adoring_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 12:01:21 compute-0 systemd[1]: libpod-conmon-d3436b3b00908b3400c0bd166c31c9272017c370d5412ec18ff6652852a53828.scope: Deactivated successfully.
Dec 09 12:01:21 compute-0 podman[73613]: 2025-12-09 12:01:21.578624104 +0000 UTC m=+0.046655302 container create 28ce2e175cdc2ba384a32bca62e9502581929263d0efcd519765d12b853ef186 (image=quay.io/ceph/ceph:v19, name=affectionate_matsumoto, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 12:01:21 compute-0 systemd[1]: Started libpod-conmon-28ce2e175cdc2ba384a32bca62e9502581929263d0efcd519765d12b853ef186.scope.
Dec 09 12:01:21 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:01:21 compute-0 podman[73613]: 2025-12-09 12:01:21.64305942 +0000 UTC m=+0.111090628 container init 28ce2e175cdc2ba384a32bca62e9502581929263d0efcd519765d12b853ef186 (image=quay.io/ceph/ceph:v19, name=affectionate_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 12:01:21 compute-0 podman[73613]: 2025-12-09 12:01:21.647966474 +0000 UTC m=+0.115997662 container start 28ce2e175cdc2ba384a32bca62e9502581929263d0efcd519765d12b853ef186 (image=quay.io/ceph/ceph:v19, name=affectionate_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 12:01:21 compute-0 podman[73613]: 2025-12-09 12:01:21.651163314 +0000 UTC m=+0.119194512 container attach 28ce2e175cdc2ba384a32bca62e9502581929263d0efcd519765d12b853ef186 (image=quay.io/ceph/ceph:v19, name=affectionate_matsumoto, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 12:01:21 compute-0 podman[73613]: 2025-12-09 12:01:21.5577348 +0000 UTC m=+0.025766018 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:01:21 compute-0 affectionate_matsumoto[73630]: AQCRDzhp+MjsJxAAZrHnSyUYSc4SECQHUHAXFA==
Dec 09 12:01:21 compute-0 systemd[1]: libpod-28ce2e175cdc2ba384a32bca62e9502581929263d0efcd519765d12b853ef186.scope: Deactivated successfully.
Dec 09 12:01:21 compute-0 podman[73613]: 2025-12-09 12:01:21.673770781 +0000 UTC m=+0.141801979 container died 28ce2e175cdc2ba384a32bca62e9502581929263d0efcd519765d12b853ef186 (image=quay.io/ceph/ceph:v19, name=affectionate_matsumoto, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 09 12:01:21 compute-0 podman[73613]: 2025-12-09 12:01:21.713484653 +0000 UTC m=+0.181515851 container remove 28ce2e175cdc2ba384a32bca62e9502581929263d0efcd519765d12b853ef186 (image=quay.io/ceph/ceph:v19, name=affectionate_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 12:01:21 compute-0 systemd[1]: libpod-conmon-28ce2e175cdc2ba384a32bca62e9502581929263d0efcd519765d12b853ef186.scope: Deactivated successfully.
Dec 09 12:01:21 compute-0 podman[73650]: 2025-12-09 12:01:21.775957629 +0000 UTC m=+0.043425350 container create e2ed6e2b4274585fc5f77b00344ae48cd0281c8368ee21025761bf5508383132 (image=quay.io/ceph/ceph:v19, name=boring_antonelli, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 12:01:21 compute-0 systemd[1]: Started libpod-conmon-e2ed6e2b4274585fc5f77b00344ae48cd0281c8368ee21025761bf5508383132.scope.
Dec 09 12:01:21 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:01:21 compute-0 podman[73650]: 2025-12-09 12:01:21.842357436 +0000 UTC m=+0.109825167 container init e2ed6e2b4274585fc5f77b00344ae48cd0281c8368ee21025761bf5508383132 (image=quay.io/ceph/ceph:v19, name=boring_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 12:01:21 compute-0 podman[73650]: 2025-12-09 12:01:21.754661123 +0000 UTC m=+0.022128864 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:01:21 compute-0 podman[73650]: 2025-12-09 12:01:21.85396764 +0000 UTC m=+0.121435361 container start e2ed6e2b4274585fc5f77b00344ae48cd0281c8368ee21025761bf5508383132 (image=quay.io/ceph/ceph:v19, name=boring_antonelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec 09 12:01:21 compute-0 podman[73650]: 2025-12-09 12:01:21.857309444 +0000 UTC m=+0.124777275 container attach e2ed6e2b4274585fc5f77b00344ae48cd0281c8368ee21025761bf5508383132 (image=quay.io/ceph/ceph:v19, name=boring_antonelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Dec 09 12:01:21 compute-0 boring_antonelli[73666]: AQCRDzhp0EXvNBAAi4ZascJKnjytlvcjOc3bsg==
Dec 09 12:01:21 compute-0 systemd[1]: libpod-e2ed6e2b4274585fc5f77b00344ae48cd0281c8368ee21025761bf5508383132.scope: Deactivated successfully.
Dec 09 12:01:21 compute-0 podman[73650]: 2025-12-09 12:01:21.892753073 +0000 UTC m=+0.160220814 container died e2ed6e2b4274585fc5f77b00344ae48cd0281c8368ee21025761bf5508383132 (image=quay.io/ceph/ceph:v19, name=boring_antonelli, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 12:01:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-4bda8a6142c79dd9b33a97bf260a2bf03ca3928472d768af6f1e74baf58f0fd7-merged.mount: Deactivated successfully.
Dec 09 12:01:21 compute-0 podman[73650]: 2025-12-09 12:01:21.933243861 +0000 UTC m=+0.200711582 container remove e2ed6e2b4274585fc5f77b00344ae48cd0281c8368ee21025761bf5508383132 (image=quay.io/ceph/ceph:v19, name=boring_antonelli, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 12:01:21 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 09 12:01:21 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 09 12:01:21 compute-0 systemd[1]: libpod-conmon-e2ed6e2b4274585fc5f77b00344ae48cd0281c8368ee21025761bf5508383132.scope: Deactivated successfully.
Dec 09 12:01:22 compute-0 podman[73683]: 2025-12-09 12:01:22.000832316 +0000 UTC m=+0.043751561 container create 6e85359c363dcfd4017e61a18c870c11294e5f1ffc1e48e7d3cdb2749ffe651c (image=quay.io/ceph/ceph:v19, name=sweet_edison, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Dec 09 12:01:22 compute-0 systemd[1]: Started libpod-conmon-6e85359c363dcfd4017e61a18c870c11294e5f1ffc1e48e7d3cdb2749ffe651c.scope.
Dec 09 12:01:22 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:01:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b75c3ebf608f1deb776579c901fb5b1c3ce090b7154d90b45514d5f7bb66c4f/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:22 compute-0 podman[73683]: 2025-12-09 12:01:22.063750844 +0000 UTC m=+0.106670059 container init 6e85359c363dcfd4017e61a18c870c11294e5f1ffc1e48e7d3cdb2749ffe651c (image=quay.io/ceph/ceph:v19, name=sweet_edison, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Dec 09 12:01:22 compute-0 podman[73683]: 2025-12-09 12:01:22.069956159 +0000 UTC m=+0.112875374 container start 6e85359c363dcfd4017e61a18c870c11294e5f1ffc1e48e7d3cdb2749ffe651c (image=quay.io/ceph/ceph:v19, name=sweet_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 09 12:01:22 compute-0 podman[73683]: 2025-12-09 12:01:22.073786029 +0000 UTC m=+0.116705274 container attach 6e85359c363dcfd4017e61a18c870c11294e5f1ffc1e48e7d3cdb2749ffe651c (image=quay.io/ceph/ceph:v19, name=sweet_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Dec 09 12:01:22 compute-0 podman[73683]: 2025-12-09 12:01:21.979881 +0000 UTC m=+0.022800225 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:01:22 compute-0 sweet_edison[73700]: /usr/bin/monmaptool: monmap file /tmp/monmap
Dec 09 12:01:22 compute-0 sweet_edison[73700]: setting min_mon_release = quincy
Dec 09 12:01:22 compute-0 sweet_edison[73700]: /usr/bin/monmaptool: set fsid to 750b57e3-924f-51a5-ab09-01517535f732
Dec 09 12:01:22 compute-0 sweet_edison[73700]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Dec 09 12:01:22 compute-0 systemd[1]: libpod-6e85359c363dcfd4017e61a18c870c11294e5f1ffc1e48e7d3cdb2749ffe651c.scope: Deactivated successfully.
Dec 09 12:01:22 compute-0 podman[73707]: 2025-12-09 12:01:22.249495427 +0000 UTC m=+0.026019406 container died 6e85359c363dcfd4017e61a18c870c11294e5f1ffc1e48e7d3cdb2749ffe651c (image=quay.io/ceph/ceph:v19, name=sweet_edison, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 09 12:01:22 compute-0 podman[73707]: 2025-12-09 12:01:22.282067336 +0000 UTC m=+0.058591285 container remove 6e85359c363dcfd4017e61a18c870c11294e5f1ffc1e48e7d3cdb2749ffe651c (image=quay.io/ceph/ceph:v19, name=sweet_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec 09 12:01:22 compute-0 systemd[1]: libpod-conmon-6e85359c363dcfd4017e61a18c870c11294e5f1ffc1e48e7d3cdb2749ffe651c.scope: Deactivated successfully.
Dec 09 12:01:22 compute-0 podman[73722]: 2025-12-09 12:01:22.34832103 +0000 UTC m=+0.039850128 container create 0c47da167e22446d98a57a666f92943692b4e595e27445b3ee127e42df4b0a9b (image=quay.io/ceph/ceph:v19, name=reverent_ishizaka, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 12:01:22 compute-0 systemd[1]: Started libpod-conmon-0c47da167e22446d98a57a666f92943692b4e595e27445b3ee127e42df4b0a9b.scope.
Dec 09 12:01:22 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:01:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b912e76068ff89c3e843d1c14715258f2a4d4f96ab622f974bf648cadae5ee22/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b912e76068ff89c3e843d1c14715258f2a4d4f96ab622f974bf648cadae5ee22/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b912e76068ff89c3e843d1c14715258f2a4d4f96ab622f974bf648cadae5ee22/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b912e76068ff89c3e843d1c14715258f2a4d4f96ab622f974bf648cadae5ee22/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:22 compute-0 podman[73722]: 2025-12-09 12:01:22.409295078 +0000 UTC m=+0.100824186 container init 0c47da167e22446d98a57a666f92943692b4e595e27445b3ee127e42df4b0a9b (image=quay.io/ceph/ceph:v19, name=reverent_ishizaka, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 09 12:01:22 compute-0 podman[73722]: 2025-12-09 12:01:22.415155391 +0000 UTC m=+0.106684489 container start 0c47da167e22446d98a57a666f92943692b4e595e27445b3ee127e42df4b0a9b (image=quay.io/ceph/ceph:v19, name=reverent_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 12:01:22 compute-0 podman[73722]: 2025-12-09 12:01:22.41866346 +0000 UTC m=+0.110192558 container attach 0c47da167e22446d98a57a666f92943692b4e595e27445b3ee127e42df4b0a9b (image=quay.io/ceph/ceph:v19, name=reverent_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 12:01:22 compute-0 podman[73722]: 2025-12-09 12:01:22.330351027 +0000 UTC m=+0.021880145 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:01:22 compute-0 systemd[1]: libpod-0c47da167e22446d98a57a666f92943692b4e595e27445b3ee127e42df4b0a9b.scope: Deactivated successfully.
Dec 09 12:01:22 compute-0 podman[73722]: 2025-12-09 12:01:22.502248307 +0000 UTC m=+0.193777405 container died 0c47da167e22446d98a57a666f92943692b4e595e27445b3ee127e42df4b0a9b (image=quay.io/ceph/ceph:v19, name=reverent_ishizaka, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec 09 12:01:22 compute-0 podman[73722]: 2025-12-09 12:01:22.542810965 +0000 UTC m=+0.234340053 container remove 0c47da167e22446d98a57a666f92943692b4e595e27445b3ee127e42df4b0a9b (image=quay.io/ceph/ceph:v19, name=reverent_ishizaka, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325)
Dec 09 12:01:22 compute-0 systemd[1]: libpod-conmon-0c47da167e22446d98a57a666f92943692b4e595e27445b3ee127e42df4b0a9b.scope: Deactivated successfully.
Dec 09 12:01:22 compute-0 systemd[1]: Reloading.
Dec 09 12:01:22 compute-0 systemd-rc-local-generator[73805]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 12:01:22 compute-0 systemd-sysv-generator[73809]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 12:01:22 compute-0 systemd[1]: Reloading.
Dec 09 12:01:22 compute-0 systemd-rc-local-generator[73846]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 12:01:22 compute-0 systemd-sysv-generator[73850]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 12:01:23 compute-0 systemd[1]: Reached target All Ceph clusters and services.
Dec 09 12:01:23 compute-0 systemd[1]: Reloading.
Dec 09 12:01:23 compute-0 systemd-sysv-generator[73887]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 12:01:23 compute-0 systemd-rc-local-generator[73884]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 12:01:23 compute-0 systemd[1]: Reached target Ceph cluster 750b57e3-924f-51a5-ab09-01517535f732.
Dec 09 12:01:23 compute-0 systemd[1]: Reloading.
Dec 09 12:01:23 compute-0 systemd-rc-local-generator[73922]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 12:01:23 compute-0 systemd-sysv-generator[73926]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 12:01:23 compute-0 systemd[1]: Reloading.
Dec 09 12:01:23 compute-0 systemd-rc-local-generator[73962]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 12:01:23 compute-0 systemd-sysv-generator[73966]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 12:01:23 compute-0 systemd[1]: Created slice Slice /system/ceph-750b57e3-924f-51a5-ab09-01517535f732.
Dec 09 12:01:23 compute-0 systemd[1]: Reached target System Time Set.
Dec 09 12:01:23 compute-0 systemd[1]: Reached target System Time Synchronized.
Dec 09 12:01:23 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 750b57e3-924f-51a5-ab09-01517535f732...
Dec 09 12:01:24 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 09 12:01:24 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 09 12:01:24 compute-0 podman[74017]: 2025-12-09 12:01:24.188530574 +0000 UTC m=+0.041352584 container create ef819e40ba8f85031c12e23fd0ae38e4b980e52d386edb6929d9d712f54aaa08 (image=quay.io/ceph/ceph:v19, name=ceph-750b57e3-924f-51a5-ab09-01517535f732-mon-compute-0, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 09 12:01:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f4a579930a60d2564e16cbf5b90c82b6c4316f14a01ce50d230d877af5673e7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f4a579930a60d2564e16cbf5b90c82b6c4316f14a01ce50d230d877af5673e7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f4a579930a60d2564e16cbf5b90c82b6c4316f14a01ce50d230d877af5673e7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f4a579930a60d2564e16cbf5b90c82b6c4316f14a01ce50d230d877af5673e7/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:24 compute-0 podman[74017]: 2025-12-09 12:01:24.250196841 +0000 UTC m=+0.103018861 container init ef819e40ba8f85031c12e23fd0ae38e4b980e52d386edb6929d9d712f54aaa08 (image=quay.io/ceph/ceph:v19, name=ceph-750b57e3-924f-51a5-ab09-01517535f732-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 09 12:01:24 compute-0 podman[74017]: 2025-12-09 12:01:24.256139079 +0000 UTC m=+0.108961069 container start ef819e40ba8f85031c12e23fd0ae38e4b980e52d386edb6929d9d712f54aaa08 (image=quay.io/ceph/ceph:v19, name=ceph-750b57e3-924f-51a5-ab09-01517535f732-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 12:01:24 compute-0 bash[74017]: ef819e40ba8f85031c12e23fd0ae38e4b980e52d386edb6929d9d712f54aaa08
Dec 09 12:01:24 compute-0 podman[74017]: 2025-12-09 12:01:24.169561421 +0000 UTC m=+0.022383441 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:01:24 compute-0 systemd[1]: Started Ceph mon.compute-0 for 750b57e3-924f-51a5-ab09-01517535f732.
Dec 09 12:01:24 compute-0 ceph-mon[74036]: set uid:gid to 167:167 (ceph:ceph)
Dec 09 12:01:24 compute-0 ceph-mon[74036]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Dec 09 12:01:24 compute-0 ceph-mon[74036]: pidfile_write: ignore empty --pid-file
Dec 09 12:01:24 compute-0 ceph-mon[74036]: load: jerasure load: lrc 
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb: RocksDB version: 7.9.2
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb: Git sha 0
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb: Compile date 2025-07-17 03:12:14
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb: DB SUMMARY
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb: DB Session ID:  E8J30FLPWKCT03AIV3DX
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb: CURRENT file:  CURRENT
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb: IDENTITY file:  IDENTITY
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                         Options.error_if_exists: 0
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                       Options.create_if_missing: 0
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                         Options.paranoid_checks: 1
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                                     Options.env: 0x558adbb06c20
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                                      Options.fs: PosixFileSystem
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                                Options.info_log: 0x558add996d60
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                Options.max_file_opening_threads: 16
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                              Options.statistics: (nil)
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                               Options.use_fsync: 0
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                       Options.max_log_file_size: 0
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                         Options.allow_fallocate: 1
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                        Options.use_direct_reads: 0
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:          Options.create_missing_column_families: 0
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                              Options.db_log_dir: 
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                                 Options.wal_dir: 
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                   Options.advise_random_on_open: 1
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                    Options.write_buffer_manager: 0x558add99b900
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                            Options.rate_limiter: (nil)
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                  Options.unordered_write: 0
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                               Options.row_cache: None
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                              Options.wal_filter: None
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:             Options.allow_ingest_behind: 0
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:             Options.two_write_queues: 0
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:             Options.manual_wal_flush: 0
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:             Options.wal_compression: 0
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:             Options.atomic_flush: 0
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                 Options.log_readahead_size: 0
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:             Options.allow_data_in_errors: 0
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:             Options.db_host_id: __hostname__
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:             Options.max_background_jobs: 2
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:             Options.max_background_compactions: -1
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:             Options.max_subcompactions: 1
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:             Options.max_total_wal_size: 0
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                          Options.max_open_files: -1
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                          Options.bytes_per_sync: 0
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:       Options.compaction_readahead_size: 0
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                  Options.max_background_flushes: -1
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb: Compression algorithms supported:
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:         kZSTD supported: 0
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:         kXpressCompression supported: 0
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:         kBZip2Compression supported: 0
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:         kLZ4Compression supported: 1
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:         kZlibCompression supported: 1
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:         kLZ4HCCompression supported: 1
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:         kSnappyCompression supported: 1
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:           Options.merge_operator: 
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:        Options.compaction_filter: None
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558add996500)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558add9bb350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:        Options.write_buffer_size: 33554432
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:  Options.max_write_buffer_number: 2
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:          Options.compression: NoCompression
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:             Options.num_levels: 7
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                           Options.bloom_locality: 0
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                               Options.ttl: 2592000
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                       Options.enable_blob_files: false
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                           Options.min_blob_size: 0
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 7deabc88-086a-4c6c-86a8-ea775f31f9d7
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765281684306021, "job": 1, "event": "recovery_started", "wal_files": [4]}
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765281684307992, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765281684, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7deabc88-086a-4c6c-86a8-ea775f31f9d7", "db_session_id": "E8J30FLPWKCT03AIV3DX", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765281684308094, "job": 1, "event": "recovery_finished"}
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x558add9bce00
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb: DB pointer 0x558addac6000
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 09 12:01:24 compute-0 ceph-mon[74036]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.07 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.07 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558add9bb350#2 capacity: 512.00 MB usage: 1.17 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(2,0.95 KB,0.000181794%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 09 12:01:24 compute-0 ceph-mon[74036]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 750b57e3-924f-51a5-ab09-01517535f732
Dec 09 12:01:24 compute-0 ceph-mon[74036]: mon.compute-0@-1(???) e0 preinit fsid 750b57e3-924f-51a5-ab09-01517535f732
Dec 09 12:01:24 compute-0 ceph-mon[74036]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Dec 09 12:01:24 compute-0 ceph-mon[74036]: mon.compute-0@0(probing) e0 win_standalone_election
Dec 09 12:01:24 compute-0 ceph-mon[74036]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Dec 09 12:01:24 compute-0 ceph-mon[74036]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 09 12:01:24 compute-0 ceph-mon[74036]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 09 12:01:24 compute-0 ceph-mon[74036]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Dec 09 12:01:24 compute-0 ceph-mon[74036]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Dec 09 12:01:24 compute-0 ceph-mon[74036]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Dec 09 12:01:24 compute-0 ceph-mon[74036]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Dec 09 12:01:24 compute-0 ceph-mon[74036]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 09 12:01:24 compute-0 ceph-mon[74036]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Dec 09 12:01:24 compute-0 podman[74037]: 2025-12-09 12:01:24.34933552 +0000 UTC m=+0.056835498 container create e7a11a9cbbf9070efca0558ee3ac57f0ffc2b3cd1924e0165bad63c3ab055626 (image=quay.io/ceph/ceph:v19, name=peaceful_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 12:01:24 compute-0 ceph-mon[74036]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Dec 09 12:01:24 compute-0 ceph-mon[74036]: mon.compute-0@0(probing) e1 win_standalone_election
Dec 09 12:01:24 compute-0 ceph-mon[74036]: paxos.0).electionLogic(2) init, last seen epoch 2
Dec 09 12:01:24 compute-0 ceph-mon[74036]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 09 12:01:24 compute-0 ceph-mon[74036]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 09 12:01:24 compute-0 ceph-mon[74036]: log_channel(cluster) log [DBG] : monmap epoch 1
Dec 09 12:01:24 compute-0 ceph-mon[74036]: log_channel(cluster) log [DBG] : fsid 750b57e3-924f-51a5-ab09-01517535f732
Dec 09 12:01:24 compute-0 ceph-mon[74036]: log_channel(cluster) log [DBG] : last_changed 2025-12-09T12:01:22.103720+0000
Dec 09 12:01:24 compute-0 ceph-mon[74036]: log_channel(cluster) log [DBG] : created 2025-12-09T12:01:22.103720+0000
Dec 09 12:01:24 compute-0 ceph-mon[74036]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Dec 09 12:01:24 compute-0 ceph-mon[74036]: log_channel(cluster) log [DBG] : election_strategy: 1
Dec 09 12:01:24 compute-0 ceph-mon[74036]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec 09 12:01:24 compute-0 ceph-mon[74036]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 09 12:01:24 compute-0 ceph-mon[74036]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v19,cpu=AMD EPYC-Rome Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Dec 5 11:18:23 UTC 2025,kernel_version=5.14.0-648.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864296,os=Linux}
Dec 09 12:01:24 compute-0 ceph-mon[74036]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Dec 09 12:01:24 compute-0 ceph-mon[74036]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Dec 09 12:01:24 compute-0 ceph-mon[74036]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Dec 09 12:01:24 compute-0 ceph-mon[74036]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Dec 09 12:01:24 compute-0 ceph-mon[74036]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 09 12:01:24 compute-0 ceph-mon[74036]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout}
Dec 09 12:01:24 compute-0 ceph-mon[74036]: mon.compute-0@0(leader).mds e1 new map
Dec 09 12:01:24 compute-0 ceph-mon[74036]: mon.compute-0@0(leader).mds e1 print_map
                                           e1
                                           btime 2025-12-09T12:01:24.354878+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Dec 09 12:01:24 compute-0 ceph-mon[74036]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Dec 09 12:01:24 compute-0 ceph-mon[74036]: log_channel(cluster) log [DBG] : fsmap 
Dec 09 12:01:24 compute-0 ceph-mon[74036]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Dec 09 12:01:24 compute-0 ceph-mon[74036]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Dec 09 12:01:24 compute-0 ceph-mon[74036]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Dec 09 12:01:24 compute-0 ceph-mon[74036]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Dec 09 12:01:24 compute-0 ceph-mon[74036]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 09 12:01:24 compute-0 ceph-mon[74036]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 09 12:01:24 compute-0 ceph-mon[74036]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 09 12:01:24 compute-0 ceph-mon[74036]: mkfs 750b57e3-924f-51a5-ab09-01517535f732
Dec 09 12:01:24 compute-0 ceph-mon[74036]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Dec 09 12:01:24 compute-0 ceph-mon[74036]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Dec 09 12:01:24 compute-0 ceph-mon[74036]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Dec 09 12:01:24 compute-0 ceph-mon[74036]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 09 12:01:24 compute-0 systemd[1]: Started libpod-conmon-e7a11a9cbbf9070efca0558ee3ac57f0ffc2b3cd1924e0165bad63c3ab055626.scope.
Dec 09 12:01:24 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:01:24 compute-0 podman[74037]: 2025-12-09 12:01:24.334157098 +0000 UTC m=+0.041657096 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:01:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85e78c4616032ce69cc0e9a9d722492e90c98b01c788e1323935b38272cf41bb/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85e78c4616032ce69cc0e9a9d722492e90c98b01c788e1323935b38272cf41bb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85e78c4616032ce69cc0e9a9d722492e90c98b01c788e1323935b38272cf41bb/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:24 compute-0 podman[74037]: 2025-12-09 12:01:24.428060459 +0000 UTC m=+0.135560457 container init e7a11a9cbbf9070efca0558ee3ac57f0ffc2b3cd1924e0165bad63c3ab055626 (image=quay.io/ceph/ceph:v19, name=peaceful_ardinghelli, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 09 12:01:24 compute-0 podman[74037]: 2025-12-09 12:01:24.436834557 +0000 UTC m=+0.144334555 container start e7a11a9cbbf9070efca0558ee3ac57f0ffc2b3cd1924e0165bad63c3ab055626 (image=quay.io/ceph/ceph:v19, name=peaceful_ardinghelli, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec 09 12:01:24 compute-0 podman[74037]: 2025-12-09 12:01:24.440313684 +0000 UTC m=+0.147813672 container attach e7a11a9cbbf9070efca0558ee3ac57f0ffc2b3cd1924e0165bad63c3ab055626 (image=quay.io/ceph/ceph:v19, name=peaceful_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 12:01:24 compute-0 ceph-mon[74036]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Dec 09 12:01:24 compute-0 ceph-mon[74036]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3793211140' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 09 12:01:24 compute-0 peaceful_ardinghelli[74091]:   cluster:
Dec 09 12:01:24 compute-0 peaceful_ardinghelli[74091]:     id:     750b57e3-924f-51a5-ab09-01517535f732
Dec 09 12:01:24 compute-0 peaceful_ardinghelli[74091]:     health: HEALTH_OK
Dec 09 12:01:24 compute-0 peaceful_ardinghelli[74091]:  
Dec 09 12:01:24 compute-0 peaceful_ardinghelli[74091]:   services:
Dec 09 12:01:24 compute-0 peaceful_ardinghelli[74091]:     mon: 1 daemons, quorum compute-0 (age 0.289919s)
Dec 09 12:01:24 compute-0 peaceful_ardinghelli[74091]:     mgr: no daemons active
Dec 09 12:01:24 compute-0 peaceful_ardinghelli[74091]:     osd: 0 osds: 0 up, 0 in
Dec 09 12:01:24 compute-0 peaceful_ardinghelli[74091]:  
Dec 09 12:01:24 compute-0 peaceful_ardinghelli[74091]:   data:
Dec 09 12:01:24 compute-0 peaceful_ardinghelli[74091]:     pools:   0 pools, 0 pgs
Dec 09 12:01:24 compute-0 peaceful_ardinghelli[74091]:     objects: 0 objects, 0 B
Dec 09 12:01:24 compute-0 peaceful_ardinghelli[74091]:     usage:   0 B used, 0 B / 0 B avail
Dec 09 12:01:24 compute-0 peaceful_ardinghelli[74091]:     pgs:     
Dec 09 12:01:24 compute-0 peaceful_ardinghelli[74091]:  
Dec 09 12:01:24 compute-0 systemd[1]: libpod-e7a11a9cbbf9070efca0558ee3ac57f0ffc2b3cd1924e0165bad63c3ab055626.scope: Deactivated successfully.
Dec 09 12:01:24 compute-0 podman[74117]: 2025-12-09 12:01:24.700534466 +0000 UTC m=+0.025332959 container died e7a11a9cbbf9070efca0558ee3ac57f0ffc2b3cd1924e0165bad63c3ab055626 (image=quay.io/ceph/ceph:v19, name=peaceful_ardinghelli, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 12:01:24 compute-0 podman[74117]: 2025-12-09 12:01:24.735493211 +0000 UTC m=+0.060291674 container remove e7a11a9cbbf9070efca0558ee3ac57f0ffc2b3cd1924e0165bad63c3ab055626 (image=quay.io/ceph/ceph:v19, name=peaceful_ardinghelli, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 12:01:24 compute-0 systemd[1]: libpod-conmon-e7a11a9cbbf9070efca0558ee3ac57f0ffc2b3cd1924e0165bad63c3ab055626.scope: Deactivated successfully.
Dec 09 12:01:24 compute-0 podman[74132]: 2025-12-09 12:01:24.814484256 +0000 UTC m=+0.046951228 container create 41d98760aad0e4bc074312f936a5ee50339195be941b006c3c3d36b0d1f2e6b3 (image=quay.io/ceph/ceph:v19, name=exciting_noyce, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 12:01:24 compute-0 systemd[1]: Started libpod-conmon-41d98760aad0e4bc074312f936a5ee50339195be941b006c3c3d36b0d1f2e6b3.scope.
Dec 09 12:01:24 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:01:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e503e639dce7ad5f18bf8bb6188ccd785d6bb87f52d52ba7b96552e088fe3384/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e503e639dce7ad5f18bf8bb6188ccd785d6bb87f52d52ba7b96552e088fe3384/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e503e639dce7ad5f18bf8bb6188ccd785d6bb87f52d52ba7b96552e088fe3384/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e503e639dce7ad5f18bf8bb6188ccd785d6bb87f52d52ba7b96552e088fe3384/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:24 compute-0 podman[74132]: 2025-12-09 12:01:24.881238895 +0000 UTC m=+0.113705887 container init 41d98760aad0e4bc074312f936a5ee50339195be941b006c3c3d36b0d1f2e6b3 (image=quay.io/ceph/ceph:v19, name=exciting_noyce, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 12:01:24 compute-0 podman[74132]: 2025-12-09 12:01:24.887180282 +0000 UTC m=+0.119647244 container start 41d98760aad0e4bc074312f936a5ee50339195be941b006c3c3d36b0d1f2e6b3 (image=quay.io/ceph/ceph:v19, name=exciting_noyce, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 09 12:01:24 compute-0 podman[74132]: 2025-12-09 12:01:24.890911534 +0000 UTC m=+0.123378526 container attach 41d98760aad0e4bc074312f936a5ee50339195be941b006c3c3d36b0d1f2e6b3 (image=quay.io/ceph/ceph:v19, name=exciting_noyce, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 09 12:01:24 compute-0 podman[74132]: 2025-12-09 12:01:24.798273955 +0000 UTC m=+0.030740957 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:01:25 compute-0 ceph-mon[74036]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Dec 09 12:01:25 compute-0 ceph-mon[74036]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3263934701' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec 09 12:01:25 compute-0 ceph-mon[74036]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3263934701' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec 09 12:01:25 compute-0 exciting_noyce[74148]: 
Dec 09 12:01:25 compute-0 exciting_noyce[74148]: [global]
Dec 09 12:01:25 compute-0 exciting_noyce[74148]:         fsid = 750b57e3-924f-51a5-ab09-01517535f732
Dec 09 12:01:25 compute-0 exciting_noyce[74148]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Dec 09 12:01:25 compute-0 systemd[1]: libpod-41d98760aad0e4bc074312f936a5ee50339195be941b006c3c3d36b0d1f2e6b3.scope: Deactivated successfully.
Dec 09 12:01:25 compute-0 podman[74132]: 2025-12-09 12:01:25.106704674 +0000 UTC m=+0.339171646 container died 41d98760aad0e4bc074312f936a5ee50339195be941b006c3c3d36b0d1f2e6b3 (image=quay.io/ceph/ceph:v19, name=exciting_noyce, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 09 12:01:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-e503e639dce7ad5f18bf8bb6188ccd785d6bb87f52d52ba7b96552e088fe3384-merged.mount: Deactivated successfully.
Dec 09 12:01:25 compute-0 podman[74132]: 2025-12-09 12:01:25.139154717 +0000 UTC m=+0.371621689 container remove 41d98760aad0e4bc074312f936a5ee50339195be941b006c3c3d36b0d1f2e6b3 (image=quay.io/ceph/ceph:v19, name=exciting_noyce, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Dec 09 12:01:25 compute-0 systemd[1]: libpod-conmon-41d98760aad0e4bc074312f936a5ee50339195be941b006c3c3d36b0d1f2e6b3.scope: Deactivated successfully.
Dec 09 12:01:25 compute-0 podman[74185]: 2025-12-09 12:01:25.192493975 +0000 UTC m=+0.034860711 container create 86222cb74e6cc715b09cc78efd88666ea2877c39e149decc50eef685f5982a39 (image=quay.io/ceph/ceph:v19, name=festive_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 12:01:25 compute-0 systemd[1]: Started libpod-conmon-86222cb74e6cc715b09cc78efd88666ea2877c39e149decc50eef685f5982a39.scope.
Dec 09 12:01:25 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:01:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e62bb0e76658d0b1c93a38f7ca0bd794f22e39bc46bb606f4e815dd74cc554d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e62bb0e76658d0b1c93a38f7ca0bd794f22e39bc46bb606f4e815dd74cc554d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e62bb0e76658d0b1c93a38f7ca0bd794f22e39bc46bb606f4e815dd74cc554d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e62bb0e76658d0b1c93a38f7ca0bd794f22e39bc46bb606f4e815dd74cc554d/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:25 compute-0 podman[74185]: 2025-12-09 12:01:25.259188421 +0000 UTC m=+0.101555187 container init 86222cb74e6cc715b09cc78efd88666ea2877c39e149decc50eef685f5982a39 (image=quay.io/ceph/ceph:v19, name=festive_elbakyan, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec 09 12:01:25 compute-0 podman[74185]: 2025-12-09 12:01:25.264692163 +0000 UTC m=+0.107058899 container start 86222cb74e6cc715b09cc78efd88666ea2877c39e149decc50eef685f5982a39 (image=quay.io/ceph/ceph:v19, name=festive_elbakyan, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 09 12:01:25 compute-0 podman[74185]: 2025-12-09 12:01:25.267870334 +0000 UTC m=+0.110237070 container attach 86222cb74e6cc715b09cc78efd88666ea2877c39e149decc50eef685f5982a39 (image=quay.io/ceph/ceph:v19, name=festive_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 12:01:25 compute-0 podman[74185]: 2025-12-09 12:01:25.176848586 +0000 UTC m=+0.019215352 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:01:25 compute-0 ceph-mon[74036]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 09 12:01:25 compute-0 ceph-mon[74036]: monmap epoch 1
Dec 09 12:01:25 compute-0 ceph-mon[74036]: fsid 750b57e3-924f-51a5-ab09-01517535f732
Dec 09 12:01:25 compute-0 ceph-mon[74036]: last_changed 2025-12-09T12:01:22.103720+0000
Dec 09 12:01:25 compute-0 ceph-mon[74036]: created 2025-12-09T12:01:22.103720+0000
Dec 09 12:01:25 compute-0 ceph-mon[74036]: min_mon_release 19 (squid)
Dec 09 12:01:25 compute-0 ceph-mon[74036]: election_strategy: 1
Dec 09 12:01:25 compute-0 ceph-mon[74036]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec 09 12:01:25 compute-0 ceph-mon[74036]: fsmap 
Dec 09 12:01:25 compute-0 ceph-mon[74036]: osdmap e1: 0 total, 0 up, 0 in
Dec 09 12:01:25 compute-0 ceph-mon[74036]: mgrmap e1: no daemons active
Dec 09 12:01:25 compute-0 ceph-mon[74036]: from='client.? 192.168.122.100:0/3793211140' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 09 12:01:25 compute-0 ceph-mon[74036]: from='client.? 192.168.122.100:0/3263934701' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec 09 12:01:25 compute-0 ceph-mon[74036]: from='client.? 192.168.122.100:0/3263934701' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec 09 12:01:25 compute-0 ceph-mon[74036]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 12:01:25 compute-0 ceph-mon[74036]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/870730740' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:01:25 compute-0 systemd[1]: libpod-86222cb74e6cc715b09cc78efd88666ea2877c39e149decc50eef685f5982a39.scope: Deactivated successfully.
Dec 09 12:01:25 compute-0 podman[74185]: 2025-12-09 12:01:25.477151514 +0000 UTC m=+0.319518250 container died 86222cb74e6cc715b09cc78efd88666ea2877c39e149decc50eef685f5982a39 (image=quay.io/ceph/ceph:v19, name=festive_elbakyan, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 09 12:01:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e62bb0e76658d0b1c93a38f7ca0bd794f22e39bc46bb606f4e815dd74cc554d-merged.mount: Deactivated successfully.
Dec 09 12:01:25 compute-0 podman[74185]: 2025-12-09 12:01:25.509706032 +0000 UTC m=+0.352072778 container remove 86222cb74e6cc715b09cc78efd88666ea2877c39e149decc50eef685f5982a39 (image=quay.io/ceph/ceph:v19, name=festive_elbakyan, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 09 12:01:25 compute-0 systemd[1]: libpod-conmon-86222cb74e6cc715b09cc78efd88666ea2877c39e149decc50eef685f5982a39.scope: Deactivated successfully.
Dec 09 12:01:25 compute-0 systemd[1]: Stopping Ceph mon.compute-0 for 750b57e3-924f-51a5-ab09-01517535f732...
Dec 09 12:01:25 compute-0 ceph-mon[74036]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Dec 09 12:01:25 compute-0 ceph-mon[74036]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Dec 09 12:01:25 compute-0 ceph-mon[74036]: mon.compute-0@0(leader) e1 shutdown
Dec 09 12:01:25 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mon-compute-0[74032]: 2025-12-09T12:01:25.702+0000 7ff7c15e1640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Dec 09 12:01:25 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mon-compute-0[74032]: 2025-12-09T12:01:25.702+0000 7ff7c15e1640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Dec 09 12:01:25 compute-0 ceph-mon[74036]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec 09 12:01:25 compute-0 ceph-mon[74036]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec 09 12:01:25 compute-0 podman[74270]: 2025-12-09 12:01:25.815710353 +0000 UTC m=+0.151259237 container died ef819e40ba8f85031c12e23fd0ae38e4b980e52d386edb6929d9d712f54aaa08 (image=quay.io/ceph/ceph:v19, name=ceph-750b57e3-924f-51a5-ab09-01517535f732-mon-compute-0, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 12:01:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f4a579930a60d2564e16cbf5b90c82b6c4316f14a01ce50d230d877af5673e7-merged.mount: Deactivated successfully.
Dec 09 12:01:25 compute-0 podman[74270]: 2025-12-09 12:01:25.860171497 +0000 UTC m=+0.195720331 container remove ef819e40ba8f85031c12e23fd0ae38e4b980e52d386edb6929d9d712f54aaa08 (image=quay.io/ceph/ceph:v19, name=ceph-750b57e3-924f-51a5-ab09-01517535f732-mon-compute-0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 09 12:01:25 compute-0 bash[74270]: ceph-750b57e3-924f-51a5-ab09-01517535f732-mon-compute-0
Dec 09 12:01:25 compute-0 systemd[1]: ceph-750b57e3-924f-51a5-ab09-01517535f732@mon.compute-0.service: Deactivated successfully.
Dec 09 12:01:25 compute-0 systemd[1]: Stopped Ceph mon.compute-0 for 750b57e3-924f-51a5-ab09-01517535f732.
Dec 09 12:01:25 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 750b57e3-924f-51a5-ab09-01517535f732...
Dec 09 12:01:26 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 09 12:01:26 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 09 12:01:26 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 09 12:01:26 compute-0 podman[74369]: 2025-12-09 12:01:26.224493919 +0000 UTC m=+0.043273047 container create a4b836a90c212a6dcd631d0879d1d67c676cdc16d15f42acc55a122ac896ef53 (image=quay.io/ceph/ceph:v19, name=ceph-750b57e3-924f-51a5-ab09-01517535f732-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 09 12:01:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa2644e835c50da3949725f1f3076dfee27f0dfd0b40674f22563c8d5528ce11/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa2644e835c50da3949725f1f3076dfee27f0dfd0b40674f22563c8d5528ce11/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa2644e835c50da3949725f1f3076dfee27f0dfd0b40674f22563c8d5528ce11/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa2644e835c50da3949725f1f3076dfee27f0dfd0b40674f22563c8d5528ce11/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:26 compute-0 podman[74369]: 2025-12-09 12:01:26.277096725 +0000 UTC m=+0.095875883 container init a4b836a90c212a6dcd631d0879d1d67c676cdc16d15f42acc55a122ac896ef53 (image=quay.io/ceph/ceph:v19, name=ceph-750b57e3-924f-51a5-ab09-01517535f732-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 12:01:26 compute-0 podman[74369]: 2025-12-09 12:01:26.283755254 +0000 UTC m=+0.102534382 container start a4b836a90c212a6dcd631d0879d1d67c676cdc16d15f42acc55a122ac896ef53 (image=quay.io/ceph/ceph:v19, name=ceph-750b57e3-924f-51a5-ab09-01517535f732-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1)
Dec 09 12:01:26 compute-0 bash[74369]: a4b836a90c212a6dcd631d0879d1d67c676cdc16d15f42acc55a122ac896ef53
Dec 09 12:01:26 compute-0 podman[74369]: 2025-12-09 12:01:26.20634406 +0000 UTC m=+0.025123208 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:01:26 compute-0 systemd[1]: Started Ceph mon.compute-0 for 750b57e3-924f-51a5-ab09-01517535f732.
Dec 09 12:01:26 compute-0 ceph-mon[74388]: set uid:gid to 167:167 (ceph:ceph)
Dec 09 12:01:26 compute-0 ceph-mon[74388]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Dec 09 12:01:26 compute-0 ceph-mon[74388]: pidfile_write: ignore empty --pid-file
Dec 09 12:01:26 compute-0 ceph-mon[74388]: load: jerasure load: lrc 
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb: RocksDB version: 7.9.2
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb: Git sha 0
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb: Compile date 2025-07-17 03:12:14
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb: DB SUMMARY
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb: DB Session ID:  SCWSPBMNCZE2SA66CLWT
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb: CURRENT file:  CURRENT
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb: IDENTITY file:  IDENTITY
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 58741 ; 
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                         Options.error_if_exists: 0
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                       Options.create_if_missing: 0
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                         Options.paranoid_checks: 1
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                                     Options.env: 0x556854714c20
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                                      Options.fs: PosixFileSystem
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                                Options.info_log: 0x556855519ac0
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                Options.max_file_opening_threads: 16
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                              Options.statistics: (nil)
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                               Options.use_fsync: 0
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                       Options.max_log_file_size: 0
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                         Options.allow_fallocate: 1
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                        Options.use_direct_reads: 0
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:          Options.create_missing_column_families: 0
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                              Options.db_log_dir: 
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                                 Options.wal_dir: 
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                   Options.advise_random_on_open: 1
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                    Options.write_buffer_manager: 0x55685551d900
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                            Options.rate_limiter: (nil)
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                  Options.unordered_write: 0
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                               Options.row_cache: None
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                              Options.wal_filter: None
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:             Options.allow_ingest_behind: 0
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:             Options.two_write_queues: 0
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:             Options.manual_wal_flush: 0
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:             Options.wal_compression: 0
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:             Options.atomic_flush: 0
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                 Options.log_readahead_size: 0
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:             Options.allow_data_in_errors: 0
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:             Options.db_host_id: __hostname__
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:             Options.max_background_jobs: 2
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:             Options.max_background_compactions: -1
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:             Options.max_subcompactions: 1
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:             Options.max_total_wal_size: 0
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                          Options.max_open_files: -1
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                          Options.bytes_per_sync: 0
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:       Options.compaction_readahead_size: 0
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                  Options.max_background_flushes: -1
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb: Compression algorithms supported:
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:         kZSTD supported: 0
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:         kXpressCompression supported: 0
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:         kBZip2Compression supported: 0
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:         kLZ4Compression supported: 1
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:         kZlibCompression supported: 1
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:         kLZ4HCCompression supported: 1
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:         kSnappyCompression supported: 1
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:           Options.merge_operator: 
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:        Options.compaction_filter: None
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556855518aa0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55685553d350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:        Options.write_buffer_size: 33554432
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:  Options.max_write_buffer_number: 2
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:          Options.compression: NoCompression
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:             Options.num_levels: 7
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                           Options.bloom_locality: 0
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                               Options.ttl: 2592000
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                       Options.enable_blob_files: false
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                           Options.min_blob_size: 0
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 7deabc88-086a-4c6c-86a8-ea775f31f9d7
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765281686323277, "job": 1, "event": "recovery_started", "wal_files": [9]}
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765281686327017, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 58492, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 137, "table_properties": {"data_size": 56966, "index_size": 168, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 325, "raw_key_size": 3182, "raw_average_key_size": 30, "raw_value_size": 54483, "raw_average_value_size": 523, "num_data_blocks": 9, "num_entries": 104, "num_filter_entries": 104, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765281686, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7deabc88-086a-4c6c-86a8-ea775f31f9d7", "db_session_id": "SCWSPBMNCZE2SA66CLWT", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765281686327122, "job": 1, "event": "recovery_finished"}
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55685553ee00
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb: DB pointer 0x556855648000
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 09 12:01:26 compute-0 ceph-mon[74388]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0   59.02 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     16.3      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      2/0   59.02 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     16.3      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     16.3      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     16.3      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 4.41 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 4.41 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685553d350#2 capacity: 512.00 MB usage: 0.84 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(2,0.48 KB,9.23872e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 09 12:01:26 compute-0 ceph-mon[74388]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 750b57e3-924f-51a5-ab09-01517535f732
Dec 09 12:01:26 compute-0 ceph-mon[74388]: mon.compute-0@-1(???) e1 preinit fsid 750b57e3-924f-51a5-ab09-01517535f732
Dec 09 12:01:26 compute-0 ceph-mon[74388]: mon.compute-0@-1(???).mds e1 new map
Dec 09 12:01:26 compute-0 ceph-mon[74388]: mon.compute-0@-1(???).mds e1 print_map
                                           e1
                                           btime 2025-12-09T12:01:24.354878+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Dec 09 12:01:26 compute-0 ceph-mon[74388]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Dec 09 12:01:26 compute-0 ceph-mon[74388]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 09 12:01:26 compute-0 ceph-mon[74388]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 09 12:01:26 compute-0 ceph-mon[74388]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 09 12:01:26 compute-0 ceph-mon[74388]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Dec 09 12:01:26 compute-0 ceph-mon[74388]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Dec 09 12:01:26 compute-0 ceph-mon[74388]: mon.compute-0@0(probing) e1 win_standalone_election
Dec 09 12:01:26 compute-0 ceph-mon[74388]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Dec 09 12:01:26 compute-0 ceph-mon[74388]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 09 12:01:26 compute-0 ceph-mon[74388]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 09 12:01:26 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : monmap epoch 1
Dec 09 12:01:26 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : fsid 750b57e3-924f-51a5-ab09-01517535f732
Dec 09 12:01:26 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : last_changed 2025-12-09T12:01:22.103720+0000
Dec 09 12:01:26 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : created 2025-12-09T12:01:22.103720+0000
Dec 09 12:01:26 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Dec 09 12:01:26 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : election_strategy: 1
Dec 09 12:01:26 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec 09 12:01:26 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 09 12:01:26 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : fsmap 
Dec 09 12:01:26 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Dec 09 12:01:26 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Dec 09 12:01:26 compute-0 podman[74389]: 2025-12-09 12:01:26.356222588 +0000 UTC m=+0.039862995 container create f7103fa49317f205a3dcfe9601e0aa1bcaeef6428f6ec6803e1a2d95ee6bed93 (image=quay.io/ceph/ceph:v19, name=recursing_hoover, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Dec 09 12:01:26 compute-0 systemd[1]: Started libpod-conmon-f7103fa49317f205a3dcfe9601e0aa1bcaeef6428f6ec6803e1a2d95ee6bed93.scope.
Dec 09 12:01:26 compute-0 ceph-mon[74388]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 09 12:01:26 compute-0 ceph-mon[74388]: monmap epoch 1
Dec 09 12:01:26 compute-0 ceph-mon[74388]: fsid 750b57e3-924f-51a5-ab09-01517535f732
Dec 09 12:01:26 compute-0 ceph-mon[74388]: last_changed 2025-12-09T12:01:22.103720+0000
Dec 09 12:01:26 compute-0 ceph-mon[74388]: created 2025-12-09T12:01:22.103720+0000
Dec 09 12:01:26 compute-0 ceph-mon[74388]: min_mon_release 19 (squid)
Dec 09 12:01:26 compute-0 ceph-mon[74388]: election_strategy: 1
Dec 09 12:01:26 compute-0 ceph-mon[74388]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec 09 12:01:26 compute-0 ceph-mon[74388]: fsmap 
Dec 09 12:01:26 compute-0 ceph-mon[74388]: osdmap e1: 0 total, 0 up, 0 in
Dec 09 12:01:26 compute-0 ceph-mon[74388]: mgrmap e1: no daemons active
Dec 09 12:01:26 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:01:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52ee9b093e07ed29c2c3203f19af27e47911b40bc6d74fb63d372ed4a51c649b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52ee9b093e07ed29c2c3203f19af27e47911b40bc6d74fb63d372ed4a51c649b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52ee9b093e07ed29c2c3203f19af27e47911b40bc6d74fb63d372ed4a51c649b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:26 compute-0 podman[74389]: 2025-12-09 12:01:26.339217462 +0000 UTC m=+0.022857889 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:01:26 compute-0 podman[74389]: 2025-12-09 12:01:26.437853722 +0000 UTC m=+0.121494119 container init f7103fa49317f205a3dcfe9601e0aa1bcaeef6428f6ec6803e1a2d95ee6bed93 (image=quay.io/ceph/ceph:v19, name=recursing_hoover, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 09 12:01:26 compute-0 podman[74389]: 2025-12-09 12:01:26.44752084 +0000 UTC m=+0.131161247 container start f7103fa49317f205a3dcfe9601e0aa1bcaeef6428f6ec6803e1a2d95ee6bed93 (image=quay.io/ceph/ceph:v19, name=recursing_hoover, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Dec 09 12:01:26 compute-0 podman[74389]: 2025-12-09 12:01:26.450906312 +0000 UTC m=+0.134546719 container attach f7103fa49317f205a3dcfe9601e0aa1bcaeef6428f6ec6803e1a2d95ee6bed93 (image=quay.io/ceph/ceph:v19, name=recursing_hoover, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 09 12:01:26 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0)
Dec 09 12:01:26 compute-0 systemd[1]: libpod-f7103fa49317f205a3dcfe9601e0aa1bcaeef6428f6ec6803e1a2d95ee6bed93.scope: Deactivated successfully.
Dec 09 12:01:26 compute-0 podman[74389]: 2025-12-09 12:01:26.677257542 +0000 UTC m=+0.360897949 container died f7103fa49317f205a3dcfe9601e0aa1bcaeef6428f6ec6803e1a2d95ee6bed93 (image=quay.io/ceph/ceph:v19, name=recursing_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 12:01:26 compute-0 podman[74389]: 2025-12-09 12:01:26.709863133 +0000 UTC m=+0.393503540 container remove f7103fa49317f205a3dcfe9601e0aa1bcaeef6428f6ec6803e1a2d95ee6bed93 (image=quay.io/ceph/ceph:v19, name=recursing_hoover, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Dec 09 12:01:26 compute-0 systemd[1]: libpod-conmon-f7103fa49317f205a3dcfe9601e0aa1bcaeef6428f6ec6803e1a2d95ee6bed93.scope: Deactivated successfully.
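The sequence above (create, init, start, attach, died, remove, all within about a third of a second) is one complete cephadm "one-shot" container: podman launches a throwaway, auto-named container (recursing_hoover) from quay.io/ceph/ceph:v19, runs a single ceph CLI command against the new monitor (here the "config set public_network" the mon logs at 12:01:26), and tears the container down again. Every create/remove cycle below follows the same pattern. A minimal Python sketch of the idea, assuming host networking and the /etc/ceph bind mount visible in the xfs remount messages above; the exact mounts and entrypoint cephadm uses differ:

    import subprocess

    def run_ceph_cmd(*args: str) -> str:
        """Run one ceph CLI command in a throwaway container, as this log shows."""
        cmd = [
            "podman", "run", "--rm", "--net=host",
            "-v", "/etc/ceph:/etc/ceph:z",   # ceph.conf and client.admin keyring
            "quay.io/ceph/ceph:v19",
            "ceph", *args,
        ]
        return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout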
Dec 09 12:01:26 compute-0 podman[74481]: 2025-12-09 12:01:26.790459369 +0000 UTC m=+0.056394603 container create eda29f8c33a40f59e6aba0e914d3c809bc81254773dd4a0a2974962f7448e848 (image=quay.io/ceph/ceph:v19, name=vibrant_robinson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec 09 12:01:26 compute-0 systemd[1]: Started libpod-conmon-eda29f8c33a40f59e6aba0e914d3c809bc81254773dd4a0a2974962f7448e848.scope.
Dec 09 12:01:26 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:01:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8135613731f3122b68ce289175f9ba3b16d9c0f5e74cf7763089625166c126d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8135613731f3122b68ce289175f9ba3b16d9c0f5e74cf7763089625166c126d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8135613731f3122b68ce289175f9ba3b16d9c0f5e74cf7763089625166c126d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:26 compute-0 podman[74481]: 2025-12-09 12:01:26.853440243 +0000 UTC m=+0.119375497 container init eda29f8c33a40f59e6aba0e914d3c809bc81254773dd4a0a2974962f7448e848 (image=quay.io/ceph/ceph:v19, name=vibrant_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 09 12:01:26 compute-0 podman[74481]: 2025-12-09 12:01:26.859267504 +0000 UTC m=+0.125202738 container start eda29f8c33a40f59e6aba0e914d3c809bc81254773dd4a0a2974962f7448e848 (image=quay.io/ceph/ceph:v19, name=vibrant_robinson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec 09 12:01:26 compute-0 podman[74481]: 2025-12-09 12:01:26.767274352 +0000 UTC m=+0.033209606 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:01:26 compute-0 podman[74481]: 2025-12-09 12:01:26.864212496 +0000 UTC m=+0.130147740 container attach eda29f8c33a40f59e6aba0e914d3c809bc81254773dd4a0a2974962f7448e848 (image=quay.io/ceph/ceph:v19, name=vibrant_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 09 12:01:27 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0)
Dec 09 12:01:27 compute-0 systemd[1]: libpod-eda29f8c33a40f59e6aba0e914d3c809bc81254773dd4a0a2974962f7448e848.scope: Deactivated successfully.
Dec 09 12:01:27 compute-0 podman[74481]: 2025-12-09 12:01:27.071286131 +0000 UTC m=+0.337221365 container died eda29f8c33a40f59e6aba0e914d3c809bc81254773dd4a0a2974962f7448e848 (image=quay.io/ceph/ceph:v19, name=vibrant_robinson, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 12:01:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-c8135613731f3122b68ce289175f9ba3b16d9c0f5e74cf7763089625166c126d-merged.mount: Deactivated successfully.
Dec 09 12:01:27 compute-0 podman[74481]: 2025-12-09 12:01:27.102583347 +0000 UTC m=+0.368518581 container remove eda29f8c33a40f59e6aba0e914d3c809bc81254773dd4a0a2974962f7448e848 (image=quay.io/ceph/ceph:v19, name=vibrant_robinson, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 09 12:01:27 compute-0 systemd[1]: libpod-conmon-eda29f8c33a40f59e6aba0e914d3c809bc81254773dd4a0a2974962f7448e848.scope: Deactivated successfully.
Dec 09 12:01:27 compute-0 systemd[1]: Reloading.
Dec 09 12:01:27 compute-0 systemd-rc-local-generator[74563]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 12:01:27 compute-0 systemd-sysv-generator[74566]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 12:01:27 compute-0 systemd[1]: Reloading.
Dec 09 12:01:27 compute-0 systemd-rc-local-generator[74604]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 12:01:27 compute-0 systemd-sysv-generator[74608]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
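The two back-to-back "Reloading." entries are systemd daemon-reloads issued while cephadm installs the unit file for the mgr daemon it is about to start; the "Starting Ceph mgr.compute-0.wfxreg" line follows immediately. The rc.local and systemd-sysv-generator messages are routine generator output that this host re-emits on every reload; they are unrelated to the Ceph deployment and are not errors. A hedged sketch of inspecting what cephadm installed, assuming its usual "ceph-<fsid>@<daemon>.service" unit naming, with the fsid taken from the log:

    import subprocess

    # List the systemd units cephadm manages for this cluster.
    print(subprocess.run(
        ["systemctl", "list-units",
         "ceph-750b57e3-924f-51a5-ab09-01517535f732@*"],
        capture_output=True, text=True).stdout)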
Dec 09 12:01:27 compute-0 systemd[1]: Starting Ceph mgr.compute-0.wfxreg for 750b57e3-924f-51a5-ab09-01517535f732...
Dec 09 12:01:27 compute-0 podman[74660]: 2025-12-09 12:01:27.886249825 +0000 UTC m=+0.042318153 container create eba899a894022afb7b791a50313d517b24448608924b1077dd900057da1ac59c (image=quay.io/ceph/ceph:v19, name=ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 09 12:01:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f813223bcac78d7a4fe7ebf46f956d127065c95a0342a5e6d854faf5ce8ff28e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f813223bcac78d7a4fe7ebf46f956d127065c95a0342a5e6d854faf5ce8ff28e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f813223bcac78d7a4fe7ebf46f956d127065c95a0342a5e6d854faf5ce8ff28e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f813223bcac78d7a4fe7ebf46f956d127065c95a0342a5e6d854faf5ce8ff28e/merged/var/lib/ceph/mgr/ceph-compute-0.wfxreg supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:27 compute-0 podman[74660]: 2025-12-09 12:01:27.94518299 +0000 UTC m=+0.101251338 container init eba899a894022afb7b791a50313d517b24448608924b1077dd900057da1ac59c (image=quay.io/ceph/ceph:v19, name=ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1)
Dec 09 12:01:27 compute-0 podman[74660]: 2025-12-09 12:01:27.950650541 +0000 UTC m=+0.106718869 container start eba899a894022afb7b791a50313d517b24448608924b1077dd900057da1ac59c (image=quay.io/ceph/ceph:v19, name=ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec 09 12:01:27 compute-0 bash[74660]: eba899a894022afb7b791a50313d517b24448608924b1077dd900057da1ac59c
Dec 09 12:01:27 compute-0 podman[74660]: 2025-12-09 12:01:27.866926638 +0000 UTC m=+0.022994996 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:01:27 compute-0 systemd[1]: Started Ceph mgr.compute-0.wfxreg for 750b57e3-924f-51a5-ab09-01517535f732.
Dec 09 12:01:27 compute-0 ceph-mgr[74679]: set uid:gid to 167:167 (ceph:ceph)
Dec 09 12:01:27 compute-0 ceph-mgr[74679]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Dec 09 12:01:27 compute-0 ceph-mgr[74679]: pidfile_write: ignore empty --pid-file
Dec 09 12:01:28 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'alerts'
Dec 09 12:01:28 compute-0 podman[74680]: 2025-12-09 12:01:28.031729324 +0000 UTC m=+0.044409323 container create 1c2db842fa6a5471e4cc8909b253c576fa2564f95a2b5fec31d98db40fca0cc6 (image=quay.io/ceph/ceph:v19, name=vigilant_tesla, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 09 12:01:28 compute-0 systemd[1]: Started libpod-conmon-1c2db842fa6a5471e4cc8909b253c576fa2564f95a2b5fec31d98db40fca0cc6.scope.
Dec 09 12:01:28 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:01:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94d1339a107aa08f086b965f66e59e445db6263f711ecd27f71519d0c765287f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94d1339a107aa08f086b965f66e59e445db6263f711ecd27f71519d0c765287f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94d1339a107aa08f086b965f66e59e445db6263f711ecd27f71519d0c765287f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:28 compute-0 podman[74680]: 2025-12-09 12:01:28.012188635 +0000 UTC m=+0.024868644 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:01:28 compute-0 podman[74680]: 2025-12-09 12:01:28.10986676 +0000 UTC m=+0.122546769 container init 1c2db842fa6a5471e4cc8909b253c576fa2564f95a2b5fec31d98db40fca0cc6 (image=quay.io/ceph/ceph:v19, name=vigilant_tesla, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 12:01:28 compute-0 podman[74680]: 2025-12-09 12:01:28.117777688 +0000 UTC m=+0.130457677 container start 1c2db842fa6a5471e4cc8909b253c576fa2564f95a2b5fec31d98db40fca0cc6 (image=quay.io/ceph/ceph:v19, name=vigilant_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 12:01:28 compute-0 podman[74680]: 2025-12-09 12:01:28.120920197 +0000 UTC m=+0.133600186 container attach 1c2db842fa6a5471e4cc8909b253c576fa2564f95a2b5fec31d98db40fca0cc6 (image=quay.io/ceph/ceph:v19, name=vigilant_tesla, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 09 12:01:28 compute-0 ceph-mgr[74679]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 09 12:01:28 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'balancer'
Dec 09 12:01:28 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:01:28.158+0000 7f0409596140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 09 12:01:28 compute-0 ceph-mgr[74679]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 09 12:01:28 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'cephadm'
Dec 09 12:01:28 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:01:28.289+0000 7f0409596140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
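Each "mgr[py] Module X has missing NOTIFY_TYPES member" warning appears twice because journald captures both ceph-mgr's own log stream (the ceph-mgr[74679] lines) and the container unit's stderr (the ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg lines). The warning itself is benign: recent Ceph mgr modules may declare which cluster notifications they consume via a NOTIFY_TYPES class attribute, and the loader flags modules that omit it. A hypothetical, minimal module illustrating the attribute (the class and its behavior are illustrative, not one of the modules above):

    # Hypothetical mgr module; NOTIFY_TYPES is the attribute whose absence
    # triggers the warnings in this log.
    from mgr_module import MgrModule, NotifyType

    class Module(MgrModule):
        # Ask the mgr to deliver only osdmap notifications to notify().
        NOTIFY_TYPES = [NotifyType.osd_map]

        def notify(self, notify_type, notify_id):
            if notify_type == NotifyType.osd_map:
                self.log.info("osdmap changed")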
Dec 09 12:01:28 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec 09 12:01:28 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3711200818' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 09 12:01:28 compute-0 vigilant_tesla[74716]: 
Dec 09 12:01:28 compute-0 vigilant_tesla[74716]: {
Dec 09 12:01:28 compute-0 vigilant_tesla[74716]:     "fsid": "750b57e3-924f-51a5-ab09-01517535f732",
Dec 09 12:01:28 compute-0 vigilant_tesla[74716]:     "health": {
Dec 09 12:01:28 compute-0 vigilant_tesla[74716]:         "status": "HEALTH_OK",
Dec 09 12:01:28 compute-0 vigilant_tesla[74716]:         "checks": {},
Dec 09 12:01:28 compute-0 vigilant_tesla[74716]:         "mutes": []
Dec 09 12:01:28 compute-0 vigilant_tesla[74716]:     },
Dec 09 12:01:28 compute-0 vigilant_tesla[74716]:     "election_epoch": 5,
Dec 09 12:01:28 compute-0 vigilant_tesla[74716]:     "quorum": [
Dec 09 12:01:28 compute-0 vigilant_tesla[74716]:         0
Dec 09 12:01:28 compute-0 vigilant_tesla[74716]:     ],
Dec 09 12:01:28 compute-0 vigilant_tesla[74716]:     "quorum_names": [
Dec 09 12:01:28 compute-0 vigilant_tesla[74716]:         "compute-0"
Dec 09 12:01:28 compute-0 vigilant_tesla[74716]:     ],
Dec 09 12:01:28 compute-0 vigilant_tesla[74716]:     "quorum_age": 2,
Dec 09 12:01:28 compute-0 vigilant_tesla[74716]:     "monmap": {
Dec 09 12:01:28 compute-0 vigilant_tesla[74716]:         "epoch": 1,
Dec 09 12:01:28 compute-0 vigilant_tesla[74716]:         "min_mon_release_name": "squid",
Dec 09 12:01:28 compute-0 vigilant_tesla[74716]:         "num_mons": 1
Dec 09 12:01:28 compute-0 vigilant_tesla[74716]:     },
Dec 09 12:01:28 compute-0 vigilant_tesla[74716]:     "osdmap": {
Dec 09 12:01:28 compute-0 vigilant_tesla[74716]:         "epoch": 1,
Dec 09 12:01:28 compute-0 vigilant_tesla[74716]:         "num_osds": 0,
Dec 09 12:01:28 compute-0 vigilant_tesla[74716]:         "num_up_osds": 0,
Dec 09 12:01:28 compute-0 vigilant_tesla[74716]:         "osd_up_since": 0,
Dec 09 12:01:28 compute-0 vigilant_tesla[74716]:         "num_in_osds": 0,
Dec 09 12:01:28 compute-0 vigilant_tesla[74716]:         "osd_in_since": 0,
Dec 09 12:01:28 compute-0 vigilant_tesla[74716]:         "num_remapped_pgs": 0
Dec 09 12:01:28 compute-0 vigilant_tesla[74716]:     },
Dec 09 12:01:28 compute-0 vigilant_tesla[74716]:     "pgmap": {
Dec 09 12:01:28 compute-0 vigilant_tesla[74716]:         "pgs_by_state": [],
Dec 09 12:01:28 compute-0 vigilant_tesla[74716]:         "num_pgs": 0,
Dec 09 12:01:28 compute-0 vigilant_tesla[74716]:         "num_pools": 0,
Dec 09 12:01:28 compute-0 vigilant_tesla[74716]:         "num_objects": 0,
Dec 09 12:01:28 compute-0 vigilant_tesla[74716]:         "data_bytes": 0,
Dec 09 12:01:28 compute-0 vigilant_tesla[74716]:         "bytes_used": 0,
Dec 09 12:01:28 compute-0 vigilant_tesla[74716]:         "bytes_avail": 0,
Dec 09 12:01:28 compute-0 vigilant_tesla[74716]:         "bytes_total": 0
Dec 09 12:01:28 compute-0 vigilant_tesla[74716]:     },
Dec 09 12:01:28 compute-0 vigilant_tesla[74716]:     "fsmap": {
Dec 09 12:01:28 compute-0 vigilant_tesla[74716]:         "epoch": 1,
Dec 09 12:01:28 compute-0 vigilant_tesla[74716]:         "btime": "2025-12-09T12:01:24.354878+0000",
Dec 09 12:01:28 compute-0 vigilant_tesla[74716]:         "by_rank": [],
Dec 09 12:01:28 compute-0 vigilant_tesla[74716]:         "up:standby": 0
Dec 09 12:01:28 compute-0 vigilant_tesla[74716]:     },
Dec 09 12:01:28 compute-0 vigilant_tesla[74716]:     "mgrmap": {
Dec 09 12:01:28 compute-0 vigilant_tesla[74716]:         "available": false,
Dec 09 12:01:28 compute-0 vigilant_tesla[74716]:         "num_standbys": 0,
Dec 09 12:01:28 compute-0 vigilant_tesla[74716]:         "modules": [
Dec 09 12:01:28 compute-0 vigilant_tesla[74716]:             "iostat",
Dec 09 12:01:28 compute-0 vigilant_tesla[74716]:             "nfs",
Dec 09 12:01:28 compute-0 vigilant_tesla[74716]:             "restful"
Dec 09 12:01:28 compute-0 vigilant_tesla[74716]:         ],
Dec 09 12:01:28 compute-0 vigilant_tesla[74716]:         "services": {}
Dec 09 12:01:28 compute-0 vigilant_tesla[74716]:     },
Dec 09 12:01:28 compute-0 vigilant_tesla[74716]:     "servicemap": {
Dec 09 12:01:28 compute-0 vigilant_tesla[74716]:         "epoch": 1,
Dec 09 12:01:28 compute-0 vigilant_tesla[74716]:         "modified": "2025-12-09T12:01:24.356907+0000",
Dec 09 12:01:28 compute-0 vigilant_tesla[74716]:         "services": {}
Dec 09 12:01:28 compute-0 vigilant_tesla[74716]:     },
Dec 09 12:01:28 compute-0 vigilant_tesla[74716]:     "progress_events": {}
Dec 09 12:01:28 compute-0 vigilant_tesla[74716]: }
Dec 09 12:01:28 compute-0 systemd[1]: libpod-1c2db842fa6a5471e4cc8909b253c576fa2564f95a2b5fec31d98db40fca0cc6.scope: Deactivated successfully.
Dec 09 12:01:28 compute-0 podman[74680]: 2025-12-09 12:01:28.379095472 +0000 UTC m=+0.391775461 container died 1c2db842fa6a5471e4cc8909b253c576fa2564f95a2b5fec31d98db40fca0cc6 (image=quay.io/ceph/ceph:v19, name=vigilant_tesla, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Dec 09 12:01:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-94d1339a107aa08f086b965f66e59e445db6263f711ecd27f71519d0c765287f-merged.mount: Deactivated successfully.
Dec 09 12:01:28 compute-0 ceph-mon[74388]: from='client.? 192.168.122.100:0/3711200818' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 09 12:01:28 compute-0 podman[74680]: 2025-12-09 12:01:28.418714812 +0000 UTC m=+0.431394801 container remove 1c2db842fa6a5471e4cc8909b253c576fa2564f95a2b5fec31d98db40fca0cc6 (image=quay.io/ceph/ceph:v19, name=vigilant_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec 09 12:01:28 compute-0 systemd[1]: libpod-conmon-1c2db842fa6a5471e4cc8909b253c576fa2564f95a2b5fec31d98db40fca0cc6.scope: Deactivated successfully.
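The JSON block above is the "ceph status --format json-pretty" that one-shot container vigilant_tesla ran, as the mon's audit log records: a freshly bootstrapped single-mon cluster that is HEALTH_OK but still empty (num_osds 0, num_pools 0) and whose mgrmap reports "available": false because ceph-mgr is still loading its modules. cephadm re-runs this check below; the later copies of the report differ only in quorum_age. A sketch of consuming the same output programmatically, reusing the hypothetical run_ceph_cmd() helper from the earlier note:

    import json

    status = json.loads(run_ceph_cmd("status", "--format", "json"))
    assert status["health"]["status"] == "HEALTH_OK"
    print(f'{status["monmap"]["num_mons"]} mon(s) in quorum '
          f'{status["quorum_names"]}; mgr available: '
          f'{status["mgrmap"]["available"]}')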
Dec 09 12:01:29 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'crash'
Dec 09 12:01:29 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:01:29.383+0000 7f0409596140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 09 12:01:29 compute-0 ceph-mgr[74679]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 09 12:01:29 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'dashboard'
Dec 09 12:01:30 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'devicehealth'
Dec 09 12:01:30 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:01:30.170+0000 7f0409596140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 09 12:01:30 compute-0 ceph-mgr[74679]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 09 12:01:30 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'diskprediction_local'
Dec 09 12:01:30 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 09 12:01:30 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 09 12:01:30 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]:   from numpy import show_config as show_numpy_config
Dec 09 12:01:30 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:01:30.362+0000 7f0409596140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 09 12:01:30 compute-0 ceph-mgr[74679]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
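The scipy UserWarning above is a side effect of how ceph-mgr hosts its modules: each python module runs in its own sub-interpreter (as the warning text itself notes), and NumPy does not fully support sub-interpreters, so importing diskprediction_local, which pulls in scipy, prints the warning once at load time. It is cosmetic here; the module is not in the enabled set shown in mgrmap, and modules can be toggled explicitly. For example, reusing the hypothetical helper:

    # Keep the predictor disabled if the warning (or the module) is unwanted.
    run_ceph_cmd("mgr", "module", "disable", "diskprediction_local")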
Dec 09 12:01:30 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'influx'
Dec 09 12:01:30 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:01:30.447+0000 7f0409596140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 09 12:01:30 compute-0 ceph-mgr[74679]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 09 12:01:30 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'insights'
Dec 09 12:01:30 compute-0 podman[74764]: 2025-12-09 12:01:30.486382181 +0000 UTC m=+0.042517865 container create ea0e9a6f1070d35ab8e089ed4af5ded423f9982e9ba8744fce5c1d47b57bbaf9 (image=quay.io/ceph/ceph:v19, name=unruffled_mclean, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 09 12:01:30 compute-0 systemd[1]: Started libpod-conmon-ea0e9a6f1070d35ab8e089ed4af5ded423f9982e9ba8744fce5c1d47b57bbaf9.scope.
Dec 09 12:01:30 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'iostat'
Dec 09 12:01:30 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:01:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1337c6bf52e2129fbddf0a57f58b3b17a6b7869c1faf827ca43daa9006cd478/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1337c6bf52e2129fbddf0a57f58b3b17a6b7869c1faf827ca43daa9006cd478/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1337c6bf52e2129fbddf0a57f58b3b17a6b7869c1faf827ca43daa9006cd478/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:30 compute-0 podman[74764]: 2025-12-09 12:01:30.552511634 +0000 UTC m=+0.108647338 container init ea0e9a6f1070d35ab8e089ed4af5ded423f9982e9ba8744fce5c1d47b57bbaf9 (image=quay.io/ceph/ceph:v19, name=unruffled_mclean, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec 09 12:01:30 compute-0 podman[74764]: 2025-12-09 12:01:30.558221899 +0000 UTC m=+0.114357583 container start ea0e9a6f1070d35ab8e089ed4af5ded423f9982e9ba8744fce5c1d47b57bbaf9 (image=quay.io/ceph/ceph:v19, name=unruffled_mclean, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 12:01:30 compute-0 podman[74764]: 2025-12-09 12:01:30.561792792 +0000 UTC m=+0.117928496 container attach ea0e9a6f1070d35ab8e089ed4af5ded423f9982e9ba8744fce5c1d47b57bbaf9 (image=quay.io/ceph/ceph:v19, name=unruffled_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 12:01:30 compute-0 podman[74764]: 2025-12-09 12:01:30.469035646 +0000 UTC m=+0.025171350 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:01:30 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:01:30.602+0000 7f0409596140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 09 12:01:30 compute-0 ceph-mgr[74679]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 09 12:01:30 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'k8sevents'
Dec 09 12:01:30 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec 09 12:01:30 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/372554607' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 09 12:01:30 compute-0 unruffled_mclean[74781]: 
Dec 09 12:01:30 compute-0 unruffled_mclean[74781]: {
Dec 09 12:01:30 compute-0 unruffled_mclean[74781]:     "fsid": "750b57e3-924f-51a5-ab09-01517535f732",
Dec 09 12:01:30 compute-0 unruffled_mclean[74781]:     "health": {
Dec 09 12:01:30 compute-0 unruffled_mclean[74781]:         "status": "HEALTH_OK",
Dec 09 12:01:30 compute-0 unruffled_mclean[74781]:         "checks": {},
Dec 09 12:01:30 compute-0 unruffled_mclean[74781]:         "mutes": []
Dec 09 12:01:30 compute-0 unruffled_mclean[74781]:     },
Dec 09 12:01:30 compute-0 unruffled_mclean[74781]:     "election_epoch": 5,
Dec 09 12:01:30 compute-0 unruffled_mclean[74781]:     "quorum": [
Dec 09 12:01:30 compute-0 unruffled_mclean[74781]:         0
Dec 09 12:01:30 compute-0 unruffled_mclean[74781]:     ],
Dec 09 12:01:30 compute-0 unruffled_mclean[74781]:     "quorum_names": [
Dec 09 12:01:30 compute-0 unruffled_mclean[74781]:         "compute-0"
Dec 09 12:01:30 compute-0 unruffled_mclean[74781]:     ],
Dec 09 12:01:30 compute-0 unruffled_mclean[74781]:     "quorum_age": 4,
Dec 09 12:01:30 compute-0 unruffled_mclean[74781]:     "monmap": {
Dec 09 12:01:30 compute-0 unruffled_mclean[74781]:         "epoch": 1,
Dec 09 12:01:30 compute-0 unruffled_mclean[74781]:         "min_mon_release_name": "squid",
Dec 09 12:01:30 compute-0 unruffled_mclean[74781]:         "num_mons": 1
Dec 09 12:01:30 compute-0 unruffled_mclean[74781]:     },
Dec 09 12:01:30 compute-0 unruffled_mclean[74781]:     "osdmap": {
Dec 09 12:01:30 compute-0 unruffled_mclean[74781]:         "epoch": 1,
Dec 09 12:01:30 compute-0 unruffled_mclean[74781]:         "num_osds": 0,
Dec 09 12:01:30 compute-0 unruffled_mclean[74781]:         "num_up_osds": 0,
Dec 09 12:01:30 compute-0 unruffled_mclean[74781]:         "osd_up_since": 0,
Dec 09 12:01:30 compute-0 unruffled_mclean[74781]:         "num_in_osds": 0,
Dec 09 12:01:30 compute-0 unruffled_mclean[74781]:         "osd_in_since": 0,
Dec 09 12:01:30 compute-0 unruffled_mclean[74781]:         "num_remapped_pgs": 0
Dec 09 12:01:30 compute-0 unruffled_mclean[74781]:     },
Dec 09 12:01:30 compute-0 unruffled_mclean[74781]:     "pgmap": {
Dec 09 12:01:30 compute-0 unruffled_mclean[74781]:         "pgs_by_state": [],
Dec 09 12:01:30 compute-0 unruffled_mclean[74781]:         "num_pgs": 0,
Dec 09 12:01:30 compute-0 unruffled_mclean[74781]:         "num_pools": 0,
Dec 09 12:01:30 compute-0 unruffled_mclean[74781]:         "num_objects": 0,
Dec 09 12:01:30 compute-0 unruffled_mclean[74781]:         "data_bytes": 0,
Dec 09 12:01:30 compute-0 unruffled_mclean[74781]:         "bytes_used": 0,
Dec 09 12:01:30 compute-0 unruffled_mclean[74781]:         "bytes_avail": 0,
Dec 09 12:01:30 compute-0 unruffled_mclean[74781]:         "bytes_total": 0
Dec 09 12:01:30 compute-0 unruffled_mclean[74781]:     },
Dec 09 12:01:30 compute-0 unruffled_mclean[74781]:     "fsmap": {
Dec 09 12:01:30 compute-0 unruffled_mclean[74781]:         "epoch": 1,
Dec 09 12:01:30 compute-0 unruffled_mclean[74781]:         "btime": "2025-12-09T12:01:24.354878+0000",
Dec 09 12:01:30 compute-0 unruffled_mclean[74781]:         "by_rank": [],
Dec 09 12:01:30 compute-0 unruffled_mclean[74781]:         "up:standby": 0
Dec 09 12:01:30 compute-0 unruffled_mclean[74781]:     },
Dec 09 12:01:30 compute-0 unruffled_mclean[74781]:     "mgrmap": {
Dec 09 12:01:30 compute-0 unruffled_mclean[74781]:         "available": false,
Dec 09 12:01:30 compute-0 unruffled_mclean[74781]:         "num_standbys": 0,
Dec 09 12:01:30 compute-0 unruffled_mclean[74781]:         "modules": [
Dec 09 12:01:30 compute-0 unruffled_mclean[74781]:             "iostat",
Dec 09 12:01:30 compute-0 unruffled_mclean[74781]:             "nfs",
Dec 09 12:01:30 compute-0 unruffled_mclean[74781]:             "restful"
Dec 09 12:01:30 compute-0 unruffled_mclean[74781]:         ],
Dec 09 12:01:30 compute-0 unruffled_mclean[74781]:         "services": {}
Dec 09 12:01:30 compute-0 unruffled_mclean[74781]:     },
Dec 09 12:01:30 compute-0 unruffled_mclean[74781]:     "servicemap": {
Dec 09 12:01:30 compute-0 unruffled_mclean[74781]:         "epoch": 1,
Dec 09 12:01:30 compute-0 unruffled_mclean[74781]:         "modified": "2025-12-09T12:01:24.356907+0000",
Dec 09 12:01:30 compute-0 unruffled_mclean[74781]:         "services": {}
Dec 09 12:01:30 compute-0 unruffled_mclean[74781]:     },
Dec 09 12:01:30 compute-0 unruffled_mclean[74781]:     "progress_events": {}
Dec 09 12:01:30 compute-0 unruffled_mclean[74781]: }
Dec 09 12:01:30 compute-0 systemd[1]: libpod-ea0e9a6f1070d35ab8e089ed4af5ded423f9982e9ba8744fce5c1d47b57bbaf9.scope: Deactivated successfully.
Dec 09 12:01:30 compute-0 podman[74764]: 2025-12-09 12:01:30.799093163 +0000 UTC m=+0.355228867 container died ea0e9a6f1070d35ab8e089ed4af5ded423f9982e9ba8744fce5c1d47b57bbaf9 (image=quay.io/ceph/ceph:v19, name=unruffled_mclean, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 12:01:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-b1337c6bf52e2129fbddf0a57f58b3b17a6b7869c1faf827ca43daa9006cd478-merged.mount: Deactivated successfully.
Dec 09 12:01:30 compute-0 ceph-mon[74388]: from='client.? 192.168.122.100:0/372554607' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 09 12:01:30 compute-0 podman[74764]: 2025-12-09 12:01:30.872883192 +0000 UTC m=+0.429018876 container remove ea0e9a6f1070d35ab8e089ed4af5ded423f9982e9ba8744fce5c1d47b57bbaf9 (image=quay.io/ceph/ceph:v19, name=unruffled_mclean, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 12:01:30 compute-0 systemd[1]: libpod-conmon-ea0e9a6f1070d35ab8e089ed4af5ded423f9982e9ba8744fce5c1d47b57bbaf9.scope: Deactivated successfully.
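The timestamps and quorum_age values show the cadence of the wait loop: the status report is sampled at 12:01:28 and 12:01:30 (and again at 12:01:33 below) with quorum_age 2, 4 and 6, i.e. every two to three seconds until the mgr finishes loading and mgrmap flips to "available": true. A hedged sketch of such a poll loop, reusing the hypothetical helpers above:

    import json
    import time

    def wait_for_mgr(timeout: float = 60.0, interval: float = 2.5) -> None:
        """Poll ceph status until the active mgr reports available."""
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            status = json.loads(run_ceph_cmd("status", "--format", "json"))
            if status["mgrmap"]["available"]:
                return
            time.sleep(interval)
        raise TimeoutError("mgr did not become available")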
Dec 09 12:01:31 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'localpool'
Dec 09 12:01:31 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'mds_autoscaler'
Dec 09 12:01:31 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'mirroring'
Dec 09 12:01:31 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'nfs'
Dec 09 12:01:31 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:01:31.793+0000 7f0409596140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 09 12:01:31 compute-0 ceph-mgr[74679]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 09 12:01:31 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'orchestrator'
Dec 09 12:01:32 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:01:32.050+0000 7f0409596140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 09 12:01:32 compute-0 ceph-mgr[74679]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 09 12:01:32 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'osd_perf_query'
Dec 09 12:01:32 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:01:32.161+0000 7f0409596140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 09 12:01:32 compute-0 ceph-mgr[74679]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 09 12:01:32 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'osd_support'
Dec 09 12:01:32 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:01:32.249+0000 7f0409596140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 09 12:01:32 compute-0 ceph-mgr[74679]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 09 12:01:32 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'pg_autoscaler'
Dec 09 12:01:32 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:01:32.338+0000 7f0409596140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 09 12:01:32 compute-0 ceph-mgr[74679]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 09 12:01:32 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'progress'
Dec 09 12:01:32 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:01:32.420+0000 7f0409596140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 09 12:01:32 compute-0 ceph-mgr[74679]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 09 12:01:32 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'prometheus'
Dec 09 12:01:32 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:01:32.821+0000 7f0409596140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 09 12:01:32 compute-0 ceph-mgr[74679]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 09 12:01:32 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'rbd_support'
Dec 09 12:01:32 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:01:32.933+0000 7f0409596140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 09 12:01:32 compute-0 ceph-mgr[74679]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 09 12:01:32 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'restful'
Dec 09 12:01:32 compute-0 podman[74818]: 2025-12-09 12:01:32.956548798 +0000 UTC m=+0.046807058 container create af639236373c03c866877a318c13d893538d0ff82f566c44c5faff11a0a9a6cd (image=quay.io/ceph/ceph:v19, name=exciting_bell, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 12:01:32 compute-0 systemd[1]: Started libpod-conmon-af639236373c03c866877a318c13d893538d0ff82f566c44c5faff11a0a9a6cd.scope.
Dec 09 12:01:33 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:01:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51a52d3528689c2c5bc18683cb2a7f00dfb2ba2563f16f415dfa9e508a92f2c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51a52d3528689c2c5bc18683cb2a7f00dfb2ba2563f16f415dfa9e508a92f2c1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51a52d3528689c2c5bc18683cb2a7f00dfb2ba2563f16f415dfa9e508a92f2c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:33 compute-0 podman[74818]: 2025-12-09 12:01:33.026128329 +0000 UTC m=+0.116386599 container init af639236373c03c866877a318c13d893538d0ff82f566c44c5faff11a0a9a6cd (image=quay.io/ceph/ceph:v19, name=exciting_bell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 09 12:01:33 compute-0 podman[74818]: 2025-12-09 12:01:33.032101707 +0000 UTC m=+0.122359977 container start af639236373c03c866877a318c13d893538d0ff82f566c44c5faff11a0a9a6cd (image=quay.io/ceph/ceph:v19, name=exciting_bell, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True)
Dec 09 12:01:33 compute-0 podman[74818]: 2025-12-09 12:01:32.938767179 +0000 UTC m=+0.029025459 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:01:33 compute-0 podman[74818]: 2025-12-09 12:01:33.0393764 +0000 UTC m=+0.129634660 container attach af639236373c03c866877a318c13d893538d0ff82f566c44c5faff11a0a9a6cd (image=quay.io/ceph/ceph:v19, name=exciting_bell, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 12:01:33 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'rgw'
Dec 09 12:01:33 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec 09 12:01:33 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1071789447' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 09 12:01:33 compute-0 exciting_bell[74834]: 
Dec 09 12:01:33 compute-0 exciting_bell[74834]: {
Dec 09 12:01:33 compute-0 exciting_bell[74834]:     "fsid": "750b57e3-924f-51a5-ab09-01517535f732",
Dec 09 12:01:33 compute-0 exciting_bell[74834]:     "health": {
Dec 09 12:01:33 compute-0 exciting_bell[74834]:         "status": "HEALTH_OK",
Dec 09 12:01:33 compute-0 exciting_bell[74834]:         "checks": {},
Dec 09 12:01:33 compute-0 exciting_bell[74834]:         "mutes": []
Dec 09 12:01:33 compute-0 exciting_bell[74834]:     },
Dec 09 12:01:33 compute-0 exciting_bell[74834]:     "election_epoch": 5,
Dec 09 12:01:33 compute-0 exciting_bell[74834]:     "quorum": [
Dec 09 12:01:33 compute-0 exciting_bell[74834]:         0
Dec 09 12:01:33 compute-0 exciting_bell[74834]:     ],
Dec 09 12:01:33 compute-0 exciting_bell[74834]:     "quorum_names": [
Dec 09 12:01:33 compute-0 exciting_bell[74834]:         "compute-0"
Dec 09 12:01:33 compute-0 exciting_bell[74834]:     ],
Dec 09 12:01:33 compute-0 exciting_bell[74834]:     "quorum_age": 6,
Dec 09 12:01:33 compute-0 exciting_bell[74834]:     "monmap": {
Dec 09 12:01:33 compute-0 exciting_bell[74834]:         "epoch": 1,
Dec 09 12:01:33 compute-0 exciting_bell[74834]:         "min_mon_release_name": "squid",
Dec 09 12:01:33 compute-0 exciting_bell[74834]:         "num_mons": 1
Dec 09 12:01:33 compute-0 exciting_bell[74834]:     },
Dec 09 12:01:33 compute-0 exciting_bell[74834]:     "osdmap": {
Dec 09 12:01:33 compute-0 exciting_bell[74834]:         "epoch": 1,
Dec 09 12:01:33 compute-0 exciting_bell[74834]:         "num_osds": 0,
Dec 09 12:01:33 compute-0 exciting_bell[74834]:         "num_up_osds": 0,
Dec 09 12:01:33 compute-0 exciting_bell[74834]:         "osd_up_since": 0,
Dec 09 12:01:33 compute-0 exciting_bell[74834]:         "num_in_osds": 0,
Dec 09 12:01:33 compute-0 exciting_bell[74834]:         "osd_in_since": 0,
Dec 09 12:01:33 compute-0 exciting_bell[74834]:         "num_remapped_pgs": 0
Dec 09 12:01:33 compute-0 exciting_bell[74834]:     },
Dec 09 12:01:33 compute-0 exciting_bell[74834]:     "pgmap": {
Dec 09 12:01:33 compute-0 exciting_bell[74834]:         "pgs_by_state": [],
Dec 09 12:01:33 compute-0 exciting_bell[74834]:         "num_pgs": 0,
Dec 09 12:01:33 compute-0 exciting_bell[74834]:         "num_pools": 0,
Dec 09 12:01:33 compute-0 exciting_bell[74834]:         "num_objects": 0,
Dec 09 12:01:33 compute-0 exciting_bell[74834]:         "data_bytes": 0,
Dec 09 12:01:33 compute-0 exciting_bell[74834]:         "bytes_used": 0,
Dec 09 12:01:33 compute-0 exciting_bell[74834]:         "bytes_avail": 0,
Dec 09 12:01:33 compute-0 exciting_bell[74834]:         "bytes_total": 0
Dec 09 12:01:33 compute-0 exciting_bell[74834]:     },
Dec 09 12:01:33 compute-0 exciting_bell[74834]:     "fsmap": {
Dec 09 12:01:33 compute-0 exciting_bell[74834]:         "epoch": 1,
Dec 09 12:01:33 compute-0 exciting_bell[74834]:         "btime": "2025-12-09T12:01:24.354878+0000",
Dec 09 12:01:33 compute-0 exciting_bell[74834]:         "by_rank": [],
Dec 09 12:01:33 compute-0 exciting_bell[74834]:         "up:standby": 0
Dec 09 12:01:33 compute-0 exciting_bell[74834]:     },
Dec 09 12:01:33 compute-0 exciting_bell[74834]:     "mgrmap": {
Dec 09 12:01:33 compute-0 exciting_bell[74834]:         "available": false,
Dec 09 12:01:33 compute-0 exciting_bell[74834]:         "num_standbys": 0,
Dec 09 12:01:33 compute-0 exciting_bell[74834]:         "modules": [
Dec 09 12:01:33 compute-0 exciting_bell[74834]:             "iostat",
Dec 09 12:01:33 compute-0 exciting_bell[74834]:             "nfs",
Dec 09 12:01:33 compute-0 exciting_bell[74834]:             "restful"
Dec 09 12:01:33 compute-0 exciting_bell[74834]:         ],
Dec 09 12:01:33 compute-0 exciting_bell[74834]:         "services": {}
Dec 09 12:01:33 compute-0 exciting_bell[74834]:     },
Dec 09 12:01:33 compute-0 exciting_bell[74834]:     "servicemap": {
Dec 09 12:01:33 compute-0 exciting_bell[74834]:         "epoch": 1,
Dec 09 12:01:33 compute-0 exciting_bell[74834]:         "modified": "2025-12-09T12:01:24.356907+0000",
Dec 09 12:01:33 compute-0 exciting_bell[74834]:         "services": {}
Dec 09 12:01:33 compute-0 exciting_bell[74834]:     },
Dec 09 12:01:33 compute-0 exciting_bell[74834]:     "progress_events": {}
Dec 09 12:01:33 compute-0 exciting_bell[74834]: }
Dec 09 12:01:33 compute-0 systemd[1]: libpod-af639236373c03c866877a318c13d893538d0ff82f566c44c5faff11a0a9a6cd.scope: Deactivated successfully.
Dec 09 12:01:33 compute-0 podman[74818]: 2025-12-09 12:01:33.3671878 +0000 UTC m=+0.457446070 container died af639236373c03c866877a318c13d893538d0ff82f566c44c5faff11a0a9a6cd (image=quay.io/ceph/ceph:v19, name=exciting_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 09 12:01:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-51a52d3528689c2c5bc18683cb2a7f00dfb2ba2563f16f415dfa9e508a92f2c1-merged.mount: Deactivated successfully.
Dec 09 12:01:33 compute-0 ceph-mon[74388]: from='client.? 192.168.122.100:0/1071789447' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 09 12:01:33 compute-0 podman[74818]: 2025-12-09 12:01:33.404860038 +0000 UTC m=+0.495118298 container remove af639236373c03c866877a318c13d893538d0ff82f566c44c5faff11a0a9a6cd (image=quay.io/ceph/ceph:v19, name=exciting_bell, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 09 12:01:33 compute-0 systemd[1]: libpod-conmon-af639236373c03c866877a318c13d893538d0ff82f566c44c5faff11a0a9a6cd.scope: Deactivated successfully.
Dec 09 12:01:33 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:01:33.559+0000 7f0409596140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 09 12:01:33 compute-0 ceph-mgr[74679]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 09 12:01:33 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'rook'
Dec 09 12:01:34 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:01:34.269+0000 7f0409596140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 09 12:01:34 compute-0 ceph-mgr[74679]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 09 12:01:34 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'selftest'
Dec 09 12:01:34 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:01:34.342+0000 7f0409596140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 09 12:01:34 compute-0 ceph-mgr[74679]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 09 12:01:34 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'snap_schedule'
Dec 09 12:01:34 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:01:34.430+0000 7f0409596140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 09 12:01:34 compute-0 ceph-mgr[74679]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 09 12:01:34 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'stats'
Dec 09 12:01:34 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'status'
Dec 09 12:01:34 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:01:34.626+0000 7f0409596140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec 09 12:01:34 compute-0 ceph-mgr[74679]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec 09 12:01:34 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'telegraf'
Dec 09 12:01:34 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:01:34.723+0000 7f0409596140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 09 12:01:34 compute-0 ceph-mgr[74679]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 09 12:01:34 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'telemetry'
Dec 09 12:01:34 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:01:34.941+0000 7f0409596140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 09 12:01:34 compute-0 ceph-mgr[74679]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 09 12:01:34 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'test_orchestrator'
Dec 09 12:01:35 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:01:35.194+0000 7f0409596140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 09 12:01:35 compute-0 ceph-mgr[74679]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 09 12:01:35 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'volumes'
Dec 09 12:01:35 compute-0 podman[74871]: 2025-12-09 12:01:35.671298901 +0000 UTC m=+0.236430823 container create b2d9b90af9811fef4af9028426a5e903caa5b316723ef2ac06ec41bf3cebaa9c (image=quay.io/ceph/ceph:v19, name=charming_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Dec 09 12:01:35 compute-0 systemd[1]: Started libpod-conmon-b2d9b90af9811fef4af9028426a5e903caa5b316723ef2ac06ec41bf3cebaa9c.scope.
Dec 09 12:01:35 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:01:35.718+0000 7f0409596140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 09 12:01:35 compute-0 ceph-mgr[74679]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 09 12:01:35 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'zabbix'
Dec 09 12:01:35 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:01:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1ea662a43729e2e25f95137fbd125447d484f6bdc54dd64830e204e8aef54e2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1ea662a43729e2e25f95137fbd125447d484f6bdc54dd64830e204e8aef54e2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1ea662a43729e2e25f95137fbd125447d484f6bdc54dd64830e204e8aef54e2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:35 compute-0 podman[74871]: 2025-12-09 12:01:35.654627705 +0000 UTC m=+0.219759647 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:01:35 compute-0 podman[74871]: 2025-12-09 12:01:35.758268708 +0000 UTC m=+0.323400650 container init b2d9b90af9811fef4af9028426a5e903caa5b316723ef2ac06ec41bf3cebaa9c (image=quay.io/ceph/ceph:v19, name=charming_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 12:01:35 compute-0 podman[74871]: 2025-12-09 12:01:35.764170083 +0000 UTC m=+0.329302005 container start b2d9b90af9811fef4af9028426a5e903caa5b316723ef2ac06ec41bf3cebaa9c (image=quay.io/ceph/ceph:v19, name=charming_saha, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec 09 12:01:35 compute-0 podman[74871]: 2025-12-09 12:01:35.767274709 +0000 UTC m=+0.332406651 container attach b2d9b90af9811fef4af9028426a5e903caa5b316723ef2ac06ec41bf3cebaa9c (image=quay.io/ceph/ceph:v19, name=charming_saha, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Dec 09 12:01:35 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:01:35.817+0000 7f0409596140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 09 12:01:35 compute-0 ceph-mgr[74679]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 09 12:01:35 compute-0 ceph-mgr[74679]: ms_deliver_dispatch: unhandled message 0x559dec1f69c0 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Dec 09 12:01:35 compute-0 ceph-mon[74388]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.wfxreg
Dec 09 12:01:35 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.wfxreg(active, starting, since 0.0139095s)
Dec 09 12:01:35 compute-0 ceph-mgr[74679]: mgr handle_mgr_map Activating!
Dec 09 12:01:35 compute-0 ceph-mgr[74679]: mgr handle_mgr_map I am now activating
Dec 09 12:01:35 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Dec 09 12:01:35 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3326211942' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec 09 12:01:35 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).mds e1 all = 1
Dec 09 12:01:35 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec 09 12:01:35 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3326211942' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 09 12:01:35 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Dec 09 12:01:35 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3326211942' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec 09 12:01:35 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec 09 12:01:35 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3326211942' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 09 12:01:35 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.wfxreg", "id": "compute-0.wfxreg"} v 0)
Dec 09 12:01:35 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3326211942' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mgr metadata", "who": "compute-0.wfxreg", "id": "compute-0.wfxreg"}]: dispatch
Dec 09 12:01:35 compute-0 ceph-mgr[74679]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 12:01:35 compute-0 ceph-mgr[74679]: mgr load Constructed class from module: balancer
Dec 09 12:01:35 compute-0 ceph-mon[74388]: log_channel(cluster) log [INF] : Manager daemon compute-0.wfxreg is now available
Dec 09 12:01:35 compute-0 ceph-mgr[74679]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 12:01:35 compute-0 ceph-mgr[74679]: mgr load Constructed class from module: crash
Dec 09 12:01:35 compute-0 ceph-mgr[74679]: [balancer INFO root] Starting
Dec 09 12:01:35 compute-0 ceph-mgr[74679]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 12:01:35 compute-0 ceph-mgr[74679]: mgr load Constructed class from module: devicehealth
Dec 09 12:01:35 compute-0 ceph-mgr[74679]: [balancer INFO root] Optimize plan auto_2025-12-09_12:01:35
Dec 09 12:01:35 compute-0 ceph-mgr[74679]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 09 12:01:35 compute-0 ceph-mgr[74679]: [balancer INFO root] do_upmap
Dec 09 12:01:35 compute-0 ceph-mgr[74679]: [balancer INFO root] No pools available
Dec 09 12:01:35 compute-0 ceph-mgr[74679]: [devicehealth INFO root] Starting
Dec 09 12:01:35 compute-0 ceph-mgr[74679]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 12:01:35 compute-0 ceph-mgr[74679]: mgr load Constructed class from module: iostat
Dec 09 12:01:35 compute-0 ceph-mgr[74679]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 12:01:35 compute-0 ceph-mgr[74679]: mgr load Constructed class from module: nfs
Dec 09 12:01:35 compute-0 ceph-mgr[74679]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 12:01:35 compute-0 ceph-mgr[74679]: mgr load Constructed class from module: orchestrator
Dec 09 12:01:35 compute-0 ceph-mgr[74679]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 12:01:35 compute-0 ceph-mgr[74679]: mgr load Constructed class from module: pg_autoscaler
Dec 09 12:01:35 compute-0 ceph-mgr[74679]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 12:01:35 compute-0 ceph-mgr[74679]: mgr load Constructed class from module: progress
Dec 09 12:01:35 compute-0 ceph-mgr[74679]: [progress INFO root] Loading...
Dec 09 12:01:35 compute-0 ceph-mgr[74679]: [progress INFO root] No stored events to load
Dec 09 12:01:35 compute-0 ceph-mgr[74679]: [progress INFO root] Loaded [] historic events
Dec 09 12:01:35 compute-0 ceph-mgr[74679]: [pg_autoscaler INFO root] _maybe_adjust
Dec 09 12:01:35 compute-0 ceph-mgr[74679]: [progress INFO root] Loaded OSDMap, ready.
Dec 09 12:01:35 compute-0 ceph-mgr[74679]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 12:01:35 compute-0 ceph-mon[74388]: Activating manager daemon compute-0.wfxreg
Dec 09 12:01:35 compute-0 ceph-mon[74388]: mgrmap e2: compute-0.wfxreg(active, starting, since 0.0139095s)
Dec 09 12:01:35 compute-0 ceph-mon[74388]: from='mgr.14102 192.168.122.100:0/3326211942' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec 09 12:01:35 compute-0 ceph-mon[74388]: from='mgr.14102 192.168.122.100:0/3326211942' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 09 12:01:35 compute-0 ceph-mon[74388]: from='mgr.14102 192.168.122.100:0/3326211942' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec 09 12:01:35 compute-0 ceph-mon[74388]: from='mgr.14102 192.168.122.100:0/3326211942' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 09 12:01:35 compute-0 ceph-mon[74388]: from='mgr.14102 192.168.122.100:0/3326211942' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mgr metadata", "who": "compute-0.wfxreg", "id": "compute-0.wfxreg"}]: dispatch
Dec 09 12:01:35 compute-0 ceph-mon[74388]: Manager daemon compute-0.wfxreg is now available
Dec 09 12:01:35 compute-0 ceph-mgr[74679]: [rbd_support INFO root] recovery thread starting
Dec 09 12:01:35 compute-0 ceph-mgr[74679]: [rbd_support INFO root] starting setup
Dec 09 12:01:35 compute-0 ceph-mgr[74679]: mgr load Constructed class from module: rbd_support
Dec 09 12:01:35 compute-0 ceph-mgr[74679]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 12:01:35 compute-0 ceph-mgr[74679]: mgr load Constructed class from module: restful
Dec 09 12:01:35 compute-0 ceph-mgr[74679]: [restful INFO root] server_addr: :: server_port: 8003
Dec 09 12:01:35 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wfxreg/mirror_snapshot_schedule"} v 0)
Dec 09 12:01:35 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3326211942' entity='mgr.compute-0.wfxreg' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wfxreg/mirror_snapshot_schedule"}]: dispatch
Dec 09 12:01:35 compute-0 ceph-mgr[74679]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 12:01:35 compute-0 ceph-mgr[74679]: mgr load Constructed class from module: status
Dec 09 12:01:35 compute-0 ceph-mgr[74679]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 09 12:01:35 compute-0 ceph-mgr[74679]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec 09 12:01:35 compute-0 ceph-mgr[74679]: [restful WARNING root] server not running: no certificate configured
Dec 09 12:01:35 compute-0 ceph-mgr[74679]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 12:01:35 compute-0 ceph-mgr[74679]: mgr load Constructed class from module: telemetry
Dec 09 12:01:35 compute-0 ceph-mgr[74679]: [rbd_support INFO root] PerfHandler: starting
Dec 09 12:01:35 compute-0 ceph-mgr[74679]: [rbd_support INFO root] TaskHandler: starting
Dec 09 12:01:35 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0)
Dec 09 12:01:35 compute-0 ceph-mgr[74679]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 12:01:35 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wfxreg/trash_purge_schedule"} v 0)
Dec 09 12:01:35 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3326211942' entity='mgr.compute-0.wfxreg' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wfxreg/trash_purge_schedule"}]: dispatch
Dec 09 12:01:35 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3326211942' entity='mgr.compute-0.wfxreg' 
Dec 09 12:01:35 compute-0 ceph-mgr[74679]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 09 12:01:35 compute-0 ceph-mgr[74679]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec 09 12:01:35 compute-0 ceph-mgr[74679]: [rbd_support INFO root] setup complete
Dec 09 12:01:35 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0)
Dec 09 12:01:35 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3326211942' entity='mgr.compute-0.wfxreg' 
Dec 09 12:01:35 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0)
Dec 09 12:01:35 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3326211942' entity='mgr.compute-0.wfxreg' 
Dec 09 12:01:35 compute-0 ceph-mgr[74679]: mgr load Constructed class from module: volumes
Dec 09 12:01:35 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec 09 12:01:35 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/64266713' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 09 12:01:35 compute-0 charming_saha[74887]: 
Dec 09 12:01:35 compute-0 charming_saha[74887]: {
Dec 09 12:01:35 compute-0 charming_saha[74887]:     "fsid": "750b57e3-924f-51a5-ab09-01517535f732",
Dec 09 12:01:35 compute-0 charming_saha[74887]:     "health": {
Dec 09 12:01:35 compute-0 charming_saha[74887]:         "status": "HEALTH_OK",
Dec 09 12:01:35 compute-0 charming_saha[74887]:         "checks": {},
Dec 09 12:01:35 compute-0 charming_saha[74887]:         "mutes": []
Dec 09 12:01:35 compute-0 charming_saha[74887]:     },
Dec 09 12:01:35 compute-0 charming_saha[74887]:     "election_epoch": 5,
Dec 09 12:01:35 compute-0 charming_saha[74887]:     "quorum": [
Dec 09 12:01:35 compute-0 charming_saha[74887]:         0
Dec 09 12:01:35 compute-0 charming_saha[74887]:     ],
Dec 09 12:01:35 compute-0 charming_saha[74887]:     "quorum_names": [
Dec 09 12:01:35 compute-0 charming_saha[74887]:         "compute-0"
Dec 09 12:01:35 compute-0 charming_saha[74887]:     ],
Dec 09 12:01:35 compute-0 charming_saha[74887]:     "quorum_age": 9,
Dec 09 12:01:35 compute-0 charming_saha[74887]:     "monmap": {
Dec 09 12:01:35 compute-0 charming_saha[74887]:         "epoch": 1,
Dec 09 12:01:35 compute-0 charming_saha[74887]:         "min_mon_release_name": "squid",
Dec 09 12:01:35 compute-0 charming_saha[74887]:         "num_mons": 1
Dec 09 12:01:35 compute-0 charming_saha[74887]:     },
Dec 09 12:01:35 compute-0 charming_saha[74887]:     "osdmap": {
Dec 09 12:01:35 compute-0 charming_saha[74887]:         "epoch": 1,
Dec 09 12:01:35 compute-0 charming_saha[74887]:         "num_osds": 0,
Dec 09 12:01:35 compute-0 charming_saha[74887]:         "num_up_osds": 0,
Dec 09 12:01:35 compute-0 charming_saha[74887]:         "osd_up_since": 0,
Dec 09 12:01:35 compute-0 charming_saha[74887]:         "num_in_osds": 0,
Dec 09 12:01:35 compute-0 charming_saha[74887]:         "osd_in_since": 0,
Dec 09 12:01:35 compute-0 charming_saha[74887]:         "num_remapped_pgs": 0
Dec 09 12:01:35 compute-0 charming_saha[74887]:     },
Dec 09 12:01:35 compute-0 charming_saha[74887]:     "pgmap": {
Dec 09 12:01:35 compute-0 charming_saha[74887]:         "pgs_by_state": [],
Dec 09 12:01:35 compute-0 charming_saha[74887]:         "num_pgs": 0,
Dec 09 12:01:35 compute-0 charming_saha[74887]:         "num_pools": 0,
Dec 09 12:01:35 compute-0 charming_saha[74887]:         "num_objects": 0,
Dec 09 12:01:35 compute-0 charming_saha[74887]:         "data_bytes": 0,
Dec 09 12:01:35 compute-0 charming_saha[74887]:         "bytes_used": 0,
Dec 09 12:01:35 compute-0 charming_saha[74887]:         "bytes_avail": 0,
Dec 09 12:01:35 compute-0 charming_saha[74887]:         "bytes_total": 0
Dec 09 12:01:35 compute-0 charming_saha[74887]:     },
Dec 09 12:01:35 compute-0 charming_saha[74887]:     "fsmap": {
Dec 09 12:01:35 compute-0 charming_saha[74887]:         "epoch": 1,
Dec 09 12:01:35 compute-0 charming_saha[74887]:         "btime": "2025-12-09T12:01:24.354878+0000",
Dec 09 12:01:35 compute-0 charming_saha[74887]:         "by_rank": [],
Dec 09 12:01:35 compute-0 charming_saha[74887]:         "up:standby": 0
Dec 09 12:01:35 compute-0 charming_saha[74887]:     },
Dec 09 12:01:35 compute-0 charming_saha[74887]:     "mgrmap": {
Dec 09 12:01:35 compute-0 charming_saha[74887]:         "available": false,
Dec 09 12:01:35 compute-0 charming_saha[74887]:         "num_standbys": 0,
Dec 09 12:01:35 compute-0 charming_saha[74887]:         "modules": [
Dec 09 12:01:35 compute-0 charming_saha[74887]:             "iostat",
Dec 09 12:01:35 compute-0 charming_saha[74887]:             "nfs",
Dec 09 12:01:35 compute-0 charming_saha[74887]:             "restful"
Dec 09 12:01:35 compute-0 charming_saha[74887]:         ],
Dec 09 12:01:35 compute-0 charming_saha[74887]:         "services": {}
Dec 09 12:01:35 compute-0 charming_saha[74887]:     },
Dec 09 12:01:35 compute-0 charming_saha[74887]:     "servicemap": {
Dec 09 12:01:35 compute-0 charming_saha[74887]:         "epoch": 1,
Dec 09 12:01:35 compute-0 charming_saha[74887]:         "modified": "2025-12-09T12:01:24.356907+0000",
Dec 09 12:01:35 compute-0 charming_saha[74887]:         "services": {}
Dec 09 12:01:35 compute-0 charming_saha[74887]:     },
Dec 09 12:01:35 compute-0 charming_saha[74887]:     "progress_events": {}
Dec 09 12:01:35 compute-0 charming_saha[74887]: }
Dec 09 12:01:36 compute-0 systemd[1]: libpod-b2d9b90af9811fef4af9028426a5e903caa5b316723ef2ac06ec41bf3cebaa9c.scope: Deactivated successfully.
Dec 09 12:01:36 compute-0 podman[74871]: 2025-12-09 12:01:36.016139547 +0000 UTC m=+0.581271469 container died b2d9b90af9811fef4af9028426a5e903caa5b316723ef2ac06ec41bf3cebaa9c (image=quay.io/ceph/ceph:v19, name=charming_saha, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec 09 12:01:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1ea662a43729e2e25f95137fbd125447d484f6bdc54dd64830e204e8aef54e2-merged.mount: Deactivated successfully.
Dec 09 12:01:36 compute-0 podman[74871]: 2025-12-09 12:01:36.053278245 +0000 UTC m=+0.618410167 container remove b2d9b90af9811fef4af9028426a5e903caa5b316723ef2ac06ec41bf3cebaa9c (image=quay.io/ceph/ceph:v19, name=charming_saha, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 12:01:36 compute-0 systemd[1]: libpod-conmon-b2d9b90af9811fef4af9028426a5e903caa5b316723ef2ac06ec41bf3cebaa9c.scope: Deactivated successfully.
Dec 09 12:01:36 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.wfxreg(active, since 1.02513s)
Dec 09 12:01:36 compute-0 ceph-mon[74388]: from='mgr.14102 192.168.122.100:0/3326211942' entity='mgr.compute-0.wfxreg' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wfxreg/mirror_snapshot_schedule"}]: dispatch
Dec 09 12:01:36 compute-0 ceph-mon[74388]: from='mgr.14102 192.168.122.100:0/3326211942' entity='mgr.compute-0.wfxreg' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wfxreg/trash_purge_schedule"}]: dispatch
Dec 09 12:01:36 compute-0 ceph-mon[74388]: from='mgr.14102 192.168.122.100:0/3326211942' entity='mgr.compute-0.wfxreg' 
Dec 09 12:01:36 compute-0 ceph-mon[74388]: from='mgr.14102 192.168.122.100:0/3326211942' entity='mgr.compute-0.wfxreg' 
Dec 09 12:01:36 compute-0 ceph-mon[74388]: from='mgr.14102 192.168.122.100:0/3326211942' entity='mgr.compute-0.wfxreg' 
Dec 09 12:01:36 compute-0 ceph-mon[74388]: from='client.? 192.168.122.100:0/64266713' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 09 12:01:36 compute-0 ceph-mon[74388]: mgrmap e3: compute-0.wfxreg(active, since 1.02513s)
Dec 09 12:01:37 compute-0 ceph-mgr[74679]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 09 12:01:37 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.wfxreg(active, since 2s)
Dec 09 12:01:38 compute-0 podman[75004]: 2025-12-09 12:01:38.116647379 +0000 UTC m=+0.040839839 container create 14f08379e7a3ed197b45aa8d9d46452f70659a1d84cac93b890d59adc69ecb18 (image=quay.io/ceph/ceph:v19, name=loving_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 12:01:38 compute-0 systemd[1]: Started libpod-conmon-14f08379e7a3ed197b45aa8d9d46452f70659a1d84cac93b890d59adc69ecb18.scope.
Dec 09 12:01:38 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:01:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93d98c098cae3ac4fe0c363339b1b9d985b6ce13097e7b5f10edfb43d4e0acba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93d98c098cae3ac4fe0c363339b1b9d985b6ce13097e7b5f10edfb43d4e0acba/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93d98c098cae3ac4fe0c363339b1b9d985b6ce13097e7b5f10edfb43d4e0acba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:38 compute-0 podman[75004]: 2025-12-09 12:01:38.177181046 +0000 UTC m=+0.101373536 container init 14f08379e7a3ed197b45aa8d9d46452f70659a1d84cac93b890d59adc69ecb18 (image=quay.io/ceph/ceph:v19, name=loving_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 09 12:01:38 compute-0 podman[75004]: 2025-12-09 12:01:38.182481917 +0000 UTC m=+0.106674377 container start 14f08379e7a3ed197b45aa8d9d46452f70659a1d84cac93b890d59adc69ecb18 (image=quay.io/ceph/ceph:v19, name=loving_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Dec 09 12:01:38 compute-0 podman[75004]: 2025-12-09 12:01:38.186026038 +0000 UTC m=+0.110218528 container attach 14f08379e7a3ed197b45aa8d9d46452f70659a1d84cac93b890d59adc69ecb18 (image=quay.io/ceph/ceph:v19, name=loving_chaum, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 12:01:38 compute-0 podman[75004]: 2025-12-09 12:01:38.100878044 +0000 UTC m=+0.025070524 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:01:38 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec 09 12:01:38 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1871371554' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 09 12:01:38 compute-0 loving_chaum[75021]: 
Dec 09 12:01:38 compute-0 loving_chaum[75021]: {
Dec 09 12:01:38 compute-0 loving_chaum[75021]:     "fsid": "750b57e3-924f-51a5-ab09-01517535f732",
Dec 09 12:01:38 compute-0 loving_chaum[75021]:     "health": {
Dec 09 12:01:38 compute-0 loving_chaum[75021]:         "status": "HEALTH_OK",
Dec 09 12:01:38 compute-0 loving_chaum[75021]:         "checks": {},
Dec 09 12:01:38 compute-0 loving_chaum[75021]:         "mutes": []
Dec 09 12:01:38 compute-0 loving_chaum[75021]:     },
Dec 09 12:01:38 compute-0 loving_chaum[75021]:     "election_epoch": 5,
Dec 09 12:01:38 compute-0 loving_chaum[75021]:     "quorum": [
Dec 09 12:01:38 compute-0 loving_chaum[75021]:         0
Dec 09 12:01:38 compute-0 loving_chaum[75021]:     ],
Dec 09 12:01:38 compute-0 loving_chaum[75021]:     "quorum_names": [
Dec 09 12:01:38 compute-0 loving_chaum[75021]:         "compute-0"
Dec 09 12:01:38 compute-0 loving_chaum[75021]:     ],
Dec 09 12:01:38 compute-0 loving_chaum[75021]:     "quorum_age": 12,
Dec 09 12:01:38 compute-0 loving_chaum[75021]:     "monmap": {
Dec 09 12:01:38 compute-0 loving_chaum[75021]:         "epoch": 1,
Dec 09 12:01:38 compute-0 loving_chaum[75021]:         "min_mon_release_name": "squid",
Dec 09 12:01:38 compute-0 loving_chaum[75021]:         "num_mons": 1
Dec 09 12:01:38 compute-0 loving_chaum[75021]:     },
Dec 09 12:01:38 compute-0 loving_chaum[75021]:     "osdmap": {
Dec 09 12:01:38 compute-0 loving_chaum[75021]:         "epoch": 1,
Dec 09 12:01:38 compute-0 loving_chaum[75021]:         "num_osds": 0,
Dec 09 12:01:38 compute-0 loving_chaum[75021]:         "num_up_osds": 0,
Dec 09 12:01:38 compute-0 loving_chaum[75021]:         "osd_up_since": 0,
Dec 09 12:01:38 compute-0 loving_chaum[75021]:         "num_in_osds": 0,
Dec 09 12:01:38 compute-0 loving_chaum[75021]:         "osd_in_since": 0,
Dec 09 12:01:38 compute-0 loving_chaum[75021]:         "num_remapped_pgs": 0
Dec 09 12:01:38 compute-0 loving_chaum[75021]:     },
Dec 09 12:01:38 compute-0 loving_chaum[75021]:     "pgmap": {
Dec 09 12:01:38 compute-0 loving_chaum[75021]:         "pgs_by_state": [],
Dec 09 12:01:38 compute-0 loving_chaum[75021]:         "num_pgs": 0,
Dec 09 12:01:38 compute-0 loving_chaum[75021]:         "num_pools": 0,
Dec 09 12:01:38 compute-0 loving_chaum[75021]:         "num_objects": 0,
Dec 09 12:01:38 compute-0 loving_chaum[75021]:         "data_bytes": 0,
Dec 09 12:01:38 compute-0 loving_chaum[75021]:         "bytes_used": 0,
Dec 09 12:01:38 compute-0 loving_chaum[75021]:         "bytes_avail": 0,
Dec 09 12:01:38 compute-0 loving_chaum[75021]:         "bytes_total": 0
Dec 09 12:01:38 compute-0 loving_chaum[75021]:     },
Dec 09 12:01:38 compute-0 loving_chaum[75021]:     "fsmap": {
Dec 09 12:01:38 compute-0 loving_chaum[75021]:         "epoch": 1,
Dec 09 12:01:38 compute-0 loving_chaum[75021]:         "btime": "2025-12-09T12:01:24.354878+0000",
Dec 09 12:01:38 compute-0 loving_chaum[75021]:         "by_rank": [],
Dec 09 12:01:38 compute-0 loving_chaum[75021]:         "up:standby": 0
Dec 09 12:01:38 compute-0 loving_chaum[75021]:     },
Dec 09 12:01:38 compute-0 loving_chaum[75021]:     "mgrmap": {
Dec 09 12:01:38 compute-0 loving_chaum[75021]:         "available": true,
Dec 09 12:01:38 compute-0 loving_chaum[75021]:         "num_standbys": 0,
Dec 09 12:01:38 compute-0 loving_chaum[75021]:         "modules": [
Dec 09 12:01:38 compute-0 loving_chaum[75021]:             "iostat",
Dec 09 12:01:38 compute-0 loving_chaum[75021]:             "nfs",
Dec 09 12:01:38 compute-0 loving_chaum[75021]:             "restful"
Dec 09 12:01:38 compute-0 loving_chaum[75021]:         ],
Dec 09 12:01:38 compute-0 loving_chaum[75021]:         "services": {}
Dec 09 12:01:38 compute-0 loving_chaum[75021]:     },
Dec 09 12:01:38 compute-0 loving_chaum[75021]:     "servicemap": {
Dec 09 12:01:38 compute-0 loving_chaum[75021]:         "epoch": 1,
Dec 09 12:01:38 compute-0 loving_chaum[75021]:         "modified": "2025-12-09T12:01:24.356907+0000",
Dec 09 12:01:38 compute-0 loving_chaum[75021]:         "services": {}
Dec 09 12:01:38 compute-0 loving_chaum[75021]:     },
Dec 09 12:01:38 compute-0 loving_chaum[75021]:     "progress_events": {}
Dec 09 12:01:38 compute-0 loving_chaum[75021]: }
Dec 09 12:01:38 compute-0 systemd[1]: libpod-14f08379e7a3ed197b45aa8d9d46452f70659a1d84cac93b890d59adc69ecb18.scope: Deactivated successfully.
Dec 09 12:01:38 compute-0 podman[75004]: 2025-12-09 12:01:38.68545853 +0000 UTC m=+0.609651000 container died 14f08379e7a3ed197b45aa8d9d46452f70659a1d84cac93b890d59adc69ecb18 (image=quay.io/ceph/ceph:v19, name=loving_chaum, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Dec 09 12:01:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-93d98c098cae3ac4fe0c363339b1b9d985b6ce13097e7b5f10edfb43d4e0acba-merged.mount: Deactivated successfully.
Dec 09 12:01:38 compute-0 podman[75004]: 2025-12-09 12:01:38.722416358 +0000 UTC m=+0.646608818 container remove 14f08379e7a3ed197b45aa8d9d46452f70659a1d84cac93b890d59adc69ecb18 (image=quay.io/ceph/ceph:v19, name=loving_chaum, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Dec 09 12:01:38 compute-0 systemd[1]: libpod-conmon-14f08379e7a3ed197b45aa8d9d46452f70659a1d84cac93b890d59adc69ecb18.scope: Deactivated successfully.
Dec 09 12:01:38 compute-0 podman[75059]: 2025-12-09 12:01:38.790412158 +0000 UTC m=+0.043143370 container create bd1f36656e8b412762fe06423c601aeba4a16cd0c7929f47b322a7516d026fab (image=quay.io/ceph/ceph:v19, name=happy_darwin, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 09 12:01:38 compute-0 systemd[1]: Started libpod-conmon-bd1f36656e8b412762fe06423c601aeba4a16cd0c7929f47b322a7516d026fab.scope.
Dec 09 12:01:38 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:01:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2b61871640c0b21b7741574f39d79c16c7e5f37dca3a284e11e98aec8022a85/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2b61871640c0b21b7741574f39d79c16c7e5f37dca3a284e11e98aec8022a85/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2b61871640c0b21b7741574f39d79c16c7e5f37dca3a284e11e98aec8022a85/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2b61871640c0b21b7741574f39d79c16c7e5f37dca3a284e11e98aec8022a85/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:38 compute-0 podman[75059]: 2025-12-09 12:01:38.865897264 +0000 UTC m=+0.118628486 container init bd1f36656e8b412762fe06423c601aeba4a16cd0c7929f47b322a7516d026fab (image=quay.io/ceph/ceph:v19, name=happy_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 09 12:01:38 compute-0 podman[75059]: 2025-12-09 12:01:38.769742305 +0000 UTC m=+0.022473547 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:01:38 compute-0 podman[75059]: 2025-12-09 12:01:38.872156578 +0000 UTC m=+0.124887790 container start bd1f36656e8b412762fe06423c601aeba4a16cd0c7929f47b322a7516d026fab (image=quay.io/ceph/ceph:v19, name=happy_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 09 12:01:38 compute-0 podman[75059]: 2025-12-09 12:01:38.87605796 +0000 UTC m=+0.128789172 container attach bd1f36656e8b412762fe06423c601aeba4a16cd0c7929f47b322a7516d026fab (image=quay.io/ceph/ceph:v19, name=happy_darwin, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 12:01:38 compute-0 ceph-mon[74388]: mgrmap e4: compute-0.wfxreg(active, since 2s)
Dec 09 12:01:38 compute-0 ceph-mon[74388]: from='client.? 192.168.122.100:0/1871371554' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 09 12:01:39 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Dec 09 12:01:39 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4137499452' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec 09 12:01:39 compute-0 happy_darwin[75075]: 
Dec 09 12:01:39 compute-0 happy_darwin[75075]: [global]
Dec 09 12:01:39 compute-0 happy_darwin[75075]:         fsid = 750b57e3-924f-51a5-ab09-01517535f732
Dec 09 12:01:39 compute-0 happy_darwin[75075]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Dec 09 12:01:39 compute-0 systemd[1]: libpod-bd1f36656e8b412762fe06423c601aeba4a16cd0c7929f47b322a7516d026fab.scope: Deactivated successfully.
Dec 09 12:01:39 compute-0 podman[75101]: 2025-12-09 12:01:39.306818324 +0000 UTC m=+0.030662422 container died bd1f36656e8b412762fe06423c601aeba4a16cd0c7929f47b322a7516d026fab (image=quay.io/ceph/ceph:v19, name=happy_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325)
Dec 09 12:01:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-d2b61871640c0b21b7741574f39d79c16c7e5f37dca3a284e11e98aec8022a85-merged.mount: Deactivated successfully.
Dec 09 12:01:39 compute-0 podman[75101]: 2025-12-09 12:01:39.33970126 +0000 UTC m=+0.063545338 container remove bd1f36656e8b412762fe06423c601aeba4a16cd0c7929f47b322a7516d026fab (image=quay.io/ceph/ceph:v19, name=happy_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 12:01:39 compute-0 systemd[1]: libpod-conmon-bd1f36656e8b412762fe06423c601aeba4a16cd0c7929f47b322a7516d026fab.scope: Deactivated successfully.
Dec 09 12:01:39 compute-0 podman[75116]: 2025-12-09 12:01:39.4503242 +0000 UTC m=+0.046776296 container create 7332ee1dacfada7eeca644652f7127478172eb740959ebb7cfa4434dd5ba51eb (image=quay.io/ceph/ceph:v19, name=jolly_kapitsa, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec 09 12:01:39 compute-0 systemd[1]: Started libpod-conmon-7332ee1dacfada7eeca644652f7127478172eb740959ebb7cfa4434dd5ba51eb.scope.
Dec 09 12:01:39 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:01:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c0640838de08d88cbd08d52f4b70511293d50af0069c08b5e5dc2221abda9bc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c0640838de08d88cbd08d52f4b70511293d50af0069c08b5e5dc2221abda9bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c0640838de08d88cbd08d52f4b70511293d50af0069c08b5e5dc2221abda9bc/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:39 compute-0 podman[75116]: 2025-12-09 12:01:39.430624872 +0000 UTC m=+0.027076978 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:01:39 compute-0 podman[75116]: 2025-12-09 12:01:39.532423641 +0000 UTC m=+0.128875767 container init 7332ee1dacfada7eeca644652f7127478172eb740959ebb7cfa4434dd5ba51eb (image=quay.io/ceph/ceph:v19, name=jolly_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 09 12:01:39 compute-0 podman[75116]: 2025-12-09 12:01:39.543660839 +0000 UTC m=+0.140112925 container start 7332ee1dacfada7eeca644652f7127478172eb740959ebb7cfa4434dd5ba51eb (image=quay.io/ceph/ceph:v19, name=jolly_kapitsa, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 12:01:39 compute-0 podman[75116]: 2025-12-09 12:01:39.547667376 +0000 UTC m=+0.144119502 container attach 7332ee1dacfada7eeca644652f7127478172eb740959ebb7cfa4434dd5ba51eb (image=quay.io/ceph/ceph:v19, name=jolly_kapitsa, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 09 12:01:39 compute-0 ceph-mgr[74679]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 09 12:01:39 compute-0 ceph-mon[74388]: from='client.? 192.168.122.100:0/4137499452' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec 09 12:01:39 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0)
Dec 09 12:01:39 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1078400006' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Dec 09 12:01:40 compute-0 ceph-mon[74388]: from='client.? 192.168.122.100:0/1078400006' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Dec 09 12:01:40 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1078400006' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Dec 09 12:01:40 compute-0 ceph-mgr[74679]: mgr handle_mgr_map respawning because set of enabled modules changed!
Dec 09 12:01:40 compute-0 ceph-mgr[74679]: mgr respawn  e: '/usr/bin/ceph-mgr'
Dec 09 12:01:40 compute-0 ceph-mgr[74679]: mgr respawn  0: '/usr/bin/ceph-mgr'
Dec 09 12:01:40 compute-0 ceph-mgr[74679]: mgr respawn  1: '-n'
Dec 09 12:01:40 compute-0 ceph-mgr[74679]: mgr respawn  2: 'mgr.compute-0.wfxreg'
Dec 09 12:01:40 compute-0 ceph-mgr[74679]: mgr respawn  3: '-f'
Dec 09 12:01:40 compute-0 ceph-mgr[74679]: mgr respawn  4: '--setuser'
Dec 09 12:01:40 compute-0 ceph-mgr[74679]: mgr respawn  5: 'ceph'
Dec 09 12:01:40 compute-0 ceph-mgr[74679]: mgr respawn  6: '--setgroup'
Dec 09 12:01:40 compute-0 ceph-mgr[74679]: mgr respawn  7: 'ceph'
Dec 09 12:01:40 compute-0 ceph-mgr[74679]: mgr respawn  8: '--default-log-to-file=false'
Dec 09 12:01:40 compute-0 ceph-mgr[74679]: mgr respawn  9: '--default-log-to-journald=true'
Dec 09 12:01:40 compute-0 ceph-mgr[74679]: mgr respawn  10: '--default-log-to-stderr=false'
Dec 09 12:01:40 compute-0 ceph-mgr[74679]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Dec 09 12:01:40 compute-0 ceph-mgr[74679]: mgr respawn  exe_path /proc/self/exe
Dec 09 12:01:40 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.wfxreg(active, since 5s)
Dec 09 12:01:40 compute-0 systemd[1]: libpod-7332ee1dacfada7eeca644652f7127478172eb740959ebb7cfa4434dd5ba51eb.scope: Deactivated successfully.
Dec 09 12:01:40 compute-0 podman[75116]: 2025-12-09 12:01:40.941222866 +0000 UTC m=+1.537674962 container died 7332ee1dacfada7eeca644652f7127478172eb740959ebb7cfa4434dd5ba51eb (image=quay.io/ceph/ceph:v19, name=jolly_kapitsa, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 12:01:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c0640838de08d88cbd08d52f4b70511293d50af0069c08b5e5dc2221abda9bc-merged.mount: Deactivated successfully.
Dec 09 12:01:41 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: ignoring --setuser ceph since I am not root
Dec 09 12:01:41 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: ignoring --setgroup ceph since I am not root
Dec 09 12:01:41 compute-0 ceph-mgr[74679]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Dec 09 12:01:41 compute-0 ceph-mgr[74679]: pidfile_write: ignore empty --pid-file
Dec 09 12:01:41 compute-0 podman[75116]: 2025-12-09 12:01:41.05824385 +0000 UTC m=+1.654695946 container remove 7332ee1dacfada7eeca644652f7127478172eb740959ebb7cfa4434dd5ba51eb (image=quay.io/ceph/ceph:v19, name=jolly_kapitsa, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 12:01:41 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'alerts'
Dec 09 12:01:41 compute-0 systemd[1]: libpod-conmon-7332ee1dacfada7eeca644652f7127478172eb740959ebb7cfa4434dd5ba51eb.scope: Deactivated successfully.
Dec 09 12:01:41 compute-0 ceph-mgr[74679]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 09 12:01:41 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'balancer'
Dec 09 12:01:41 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:01:41.191+0000 7ff2c3208140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 09 12:01:41 compute-0 podman[75191]: 2025-12-09 12:01:41.203133415 +0000 UTC m=+0.118150889 container create 9d6e4e044e7f5863efd1595fd36445a4875913d7c7bafb465bb5894eb58dbb03 (image=quay.io/ceph/ceph:v19, name=amazing_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Dec 09 12:01:41 compute-0 podman[75191]: 2025-12-09 12:01:41.117094541 +0000 UTC m=+0.032111995 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:01:41 compute-0 systemd[1]: Started libpod-conmon-9d6e4e044e7f5863efd1595fd36445a4875913d7c7bafb465bb5894eb58dbb03.scope.
Dec 09 12:01:41 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:01:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddf702762f1e3084cd80e58172f1f86e1256cbe9b04443497b81823f2e752a71/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddf702762f1e3084cd80e58172f1f86e1256cbe9b04443497b81823f2e752a71/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddf702762f1e3084cd80e58172f1f86e1256cbe9b04443497b81823f2e752a71/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:41 compute-0 podman[75191]: 2025-12-09 12:01:41.279829208 +0000 UTC m=+0.194846662 container init 9d6e4e044e7f5863efd1595fd36445a4875913d7c7bafb465bb5894eb58dbb03 (image=quay.io/ceph/ceph:v19, name=amazing_boyd, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 09 12:01:41 compute-0 podman[75191]: 2025-12-09 12:01:41.285972557 +0000 UTC m=+0.200989991 container start 9d6e4e044e7f5863efd1595fd36445a4875913d7c7bafb465bb5894eb58dbb03 (image=quay.io/ceph/ceph:v19, name=amazing_boyd, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec 09 12:01:41 compute-0 ceph-mgr[74679]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 09 12:01:41 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'cephadm'
Dec 09 12:01:41 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:01:41.286+0000 7ff2c3208140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 09 12:01:41 compute-0 podman[75191]: 2025-12-09 12:01:41.289111826 +0000 UTC m=+0.204129350 container attach 9d6e4e044e7f5863efd1595fd36445a4875913d7c7bafb465bb5894eb58dbb03 (image=quay.io/ceph/ceph:v19, name=amazing_boyd, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 12:01:41 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Dec 09 12:01:41 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3793045550' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec 09 12:01:41 compute-0 amazing_boyd[75207]: {
Dec 09 12:01:41 compute-0 amazing_boyd[75207]:     "epoch": 5,
Dec 09 12:01:41 compute-0 amazing_boyd[75207]:     "available": true,
Dec 09 12:01:41 compute-0 amazing_boyd[75207]:     "active_name": "compute-0.wfxreg",
Dec 09 12:01:41 compute-0 amazing_boyd[75207]:     "num_standby": 0
Dec 09 12:01:41 compute-0 amazing_boyd[75207]: }
Dec 09 12:01:41 compute-0 systemd[1]: libpod-9d6e4e044e7f5863efd1595fd36445a4875913d7c7bafb465bb5894eb58dbb03.scope: Deactivated successfully.
Dec 09 12:01:41 compute-0 podman[75191]: 2025-12-09 12:01:41.716560411 +0000 UTC m=+0.631577845 container died 9d6e4e044e7f5863efd1595fd36445a4875913d7c7bafb465bb5894eb58dbb03 (image=quay.io/ceph/ceph:v19, name=amazing_boyd, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 12:01:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-ddf702762f1e3084cd80e58172f1f86e1256cbe9b04443497b81823f2e752a71-merged.mount: Deactivated successfully.
Dec 09 12:01:41 compute-0 podman[75191]: 2025-12-09 12:01:41.756340049 +0000 UTC m=+0.671357483 container remove 9d6e4e044e7f5863efd1595fd36445a4875913d7c7bafb465bb5894eb58dbb03 (image=quay.io/ceph/ceph:v19, name=amazing_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 09 12:01:41 compute-0 systemd[1]: libpod-conmon-9d6e4e044e7f5863efd1595fd36445a4875913d7c7bafb465bb5894eb58dbb03.scope: Deactivated successfully.
Dec 09 12:01:41 compute-0 podman[75257]: 2025-12-09 12:01:41.824329609 +0000 UTC m=+0.042270101 container create 99bc3869d18624809201c51b0f8da5fae2cbd9e50283e4e64a87cfd0e19a33f6 (image=quay.io/ceph/ceph:v19, name=youthful_keller, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 09 12:01:41 compute-0 systemd[1]: Started libpod-conmon-99bc3869d18624809201c51b0f8da5fae2cbd9e50283e4e64a87cfd0e19a33f6.scope.
Dec 09 12:01:41 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:01:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ba7cf633747969f6737ed4e9eb4e2fee5e7a53ae4e5cda4fef68156036b2604/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ba7cf633747969f6737ed4e9eb4e2fee5e7a53ae4e5cda4fef68156036b2604/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ba7cf633747969f6737ed4e9eb4e2fee5e7a53ae4e5cda4fef68156036b2604/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:41 compute-0 podman[75257]: 2025-12-09 12:01:41.90150072 +0000 UTC m=+0.119441212 container init 99bc3869d18624809201c51b0f8da5fae2cbd9e50283e4e64a87cfd0e19a33f6 (image=quay.io/ceph/ceph:v19, name=youthful_keller, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Dec 09 12:01:41 compute-0 podman[75257]: 2025-12-09 12:01:41.806366789 +0000 UTC m=+0.024307301 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:01:41 compute-0 podman[75257]: 2025-12-09 12:01:41.908666357 +0000 UTC m=+0.126606849 container start 99bc3869d18624809201c51b0f8da5fae2cbd9e50283e4e64a87cfd0e19a33f6 (image=quay.io/ceph/ceph:v19, name=youthful_keller, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 12:01:41 compute-0 podman[75257]: 2025-12-09 12:01:41.912454332 +0000 UTC m=+0.130394854 container attach 99bc3869d18624809201c51b0f8da5fae2cbd9e50283e4e64a87cfd0e19a33f6 (image=quay.io/ceph/ceph:v19, name=youthful_keller, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 12:01:41 compute-0 ceph-mon[74388]: from='client.? 192.168.122.100:0/1078400006' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Dec 09 12:01:41 compute-0 ceph-mon[74388]: mgrmap e5: compute-0.wfxreg(active, since 5s)
Dec 09 12:01:41 compute-0 ceph-mon[74388]: from='client.? 192.168.122.100:0/3793045550' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec 09 12:01:42 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'crash'
Dec 09 12:01:42 compute-0 ceph-mgr[74679]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 09 12:01:42 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'dashboard'
Dec 09 12:01:42 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:01:42.222+0000 7ff2c3208140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 09 12:01:42 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'devicehealth'
Dec 09 12:01:42 compute-0 ceph-mgr[74679]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 09 12:01:42 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'diskprediction_local'
Dec 09 12:01:42 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:01:42.947+0000 7ff2c3208140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 09 12:01:43 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 09 12:01:43 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 09 12:01:43 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]:   from numpy import show_config as show_numpy_config
Dec 09 12:01:43 compute-0 ceph-mgr[74679]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 09 12:01:43 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'influx'
Dec 09 12:01:43 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:01:43.162+0000 7ff2c3208140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 09 12:01:43 compute-0 ceph-mgr[74679]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 09 12:01:43 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'insights'
Dec 09 12:01:43 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:01:43.257+0000 7ff2c3208140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 09 12:01:43 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'iostat'
Dec 09 12:01:43 compute-0 ceph-mgr[74679]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 09 12:01:43 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:01:43.433+0000 7ff2c3208140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 09 12:01:43 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'k8sevents'
Dec 09 12:01:43 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'localpool'
Dec 09 12:01:44 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'mds_autoscaler'
Dec 09 12:01:44 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'mirroring'
Dec 09 12:01:44 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'nfs'
Dec 09 12:01:44 compute-0 ceph-mgr[74679]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 09 12:01:44 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'orchestrator'
Dec 09 12:01:44 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:01:44.775+0000 7ff2c3208140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 09 12:01:45 compute-0 ceph-mgr[74679]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 09 12:01:45 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'osd_perf_query'
Dec 09 12:01:45 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:01:45.091+0000 7ff2c3208140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 09 12:01:45 compute-0 ceph-mgr[74679]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 09 12:01:45 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'osd_support'
Dec 09 12:01:45 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:01:45.181+0000 7ff2c3208140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 09 12:01:45 compute-0 ceph-mgr[74679]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 09 12:01:45 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'pg_autoscaler'
Dec 09 12:01:45 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:01:45.259+0000 7ff2c3208140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 09 12:01:45 compute-0 ceph-mgr[74679]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 09 12:01:45 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'progress'
Dec 09 12:01:45 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:01:45.342+0000 7ff2c3208140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 09 12:01:45 compute-0 ceph-mgr[74679]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 09 12:01:45 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'prometheus'
Dec 09 12:01:45 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:01:45.430+0000 7ff2c3208140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 09 12:01:45 compute-0 ceph-mgr[74679]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 09 12:01:45 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:01:45.850+0000 7ff2c3208140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 09 12:01:45 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'rbd_support'
Dec 09 12:01:45 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:01:45.956+0000 7ff2c3208140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 09 12:01:45 compute-0 ceph-mgr[74679]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 09 12:01:45 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'restful'
Dec 09 12:01:46 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'rgw'
Dec 09 12:01:46 compute-0 ceph-mgr[74679]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 09 12:01:46 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:01:46.409+0000 7ff2c3208140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 09 12:01:46 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'rook'
Dec 09 12:01:47 compute-0 ceph-mgr[74679]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 09 12:01:47 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:01:47.030+0000 7ff2c3208140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 09 12:01:47 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'selftest'
Dec 09 12:01:47 compute-0 ceph-mgr[74679]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 09 12:01:47 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:01:47.101+0000 7ff2c3208140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 09 12:01:47 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'snap_schedule'
Dec 09 12:01:47 compute-0 ceph-mgr[74679]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 09 12:01:47 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'stats'
Dec 09 12:01:47 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:01:47.187+0000 7ff2c3208140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 09 12:01:47 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'status'
Dec 09 12:01:47 compute-0 ceph-mgr[74679]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec 09 12:01:47 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'telegraf'
Dec 09 12:01:47 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:01:47.360+0000 7ff2c3208140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec 09 12:01:47 compute-0 ceph-mgr[74679]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 09 12:01:47 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'telemetry'
Dec 09 12:01:47 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:01:47.445+0000 7ff2c3208140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 09 12:01:47 compute-0 ceph-mgr[74679]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 09 12:01:47 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'test_orchestrator'
Dec 09 12:01:47 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:01:47.645+0000 7ff2c3208140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 09 12:01:47 compute-0 ceph-mgr[74679]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 09 12:01:47 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'volumes'
Dec 09 12:01:47 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:01:47.894+0000 7ff2c3208140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 09 12:01:48 compute-0 ceph-mgr[74679]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 09 12:01:48 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'zabbix'
Dec 09 12:01:48 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:01:48.193+0000 7ff2c3208140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 09 12:01:48 compute-0 ceph-mgr[74679]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 09 12:01:48 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:01:48.269+0000 7ff2c3208140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 09 12:01:48 compute-0 ceph-mon[74388]: log_channel(cluster) log [INF] : Active manager daemon compute-0.wfxreg restarted
Dec 09 12:01:48 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Dec 09 12:01:48 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 09 12:01:48 compute-0 ceph-mon[74388]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.wfxreg
Dec 09 12:01:48 compute-0 ceph-mgr[74679]: ms_deliver_dispatch: unhandled message 0x5637eb236d00 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Dec 09 12:01:49 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Dec 09 12:01:49 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Dec 09 12:01:49 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Dec 09 12:01:49 compute-0 ceph-mgr[74679]: mgr handle_mgr_map Activating!
Dec 09 12:01:49 compute-0 ceph-mgr[74679]: mgr handle_mgr_map I am now activating
Dec 09 12:01:49 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Dec 09 12:01:49 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.wfxreg(active, starting, since 1.55303s)
Dec 09 12:01:49 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec 09 12:01:49 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 09 12:01:49 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.wfxreg", "id": "compute-0.wfxreg"} v 0)
Dec 09 12:01:49 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mgr metadata", "who": "compute-0.wfxreg", "id": "compute-0.wfxreg"}]: dispatch
Dec 09 12:01:49 compute-0 ceph-mon[74388]: Active manager daemon compute-0.wfxreg restarted
Dec 09 12:01:49 compute-0 ceph-mon[74388]: Activating manager daemon compute-0.wfxreg
Dec 09 12:01:49 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Dec 09 12:01:49 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec 09 12:01:49 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).mds e1 all = 1
Dec 09 12:01:49 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec 09 12:01:49 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 09 12:01:49 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Dec 09 12:01:49 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec 09 12:01:49 compute-0 ceph-mgr[74679]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 12:01:49 compute-0 ceph-mgr[74679]: mgr load Constructed class from module: balancer
Dec 09 12:01:49 compute-0 ceph-mon[74388]: log_channel(cluster) log [INF] : Manager daemon compute-0.wfxreg is now available
Dec 09 12:01:49 compute-0 ceph-mgr[74679]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 12:01:49 compute-0 ceph-mgr[74679]: [balancer INFO root] Starting
Dec 09 12:01:49 compute-0 ceph-mgr[74679]: [balancer INFO root] Optimize plan auto_2025-12-09_12:01:49
Dec 09 12:01:49 compute-0 ceph-mgr[74679]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 09 12:01:49 compute-0 ceph-mgr[74679]: [balancer INFO root] do_upmap
Dec 09 12:01:49 compute-0 ceph-mgr[74679]: [balancer INFO root] No pools available
Dec 09 12:01:49 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Dec 09 12:01:49 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Dec 09 12:01:49 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0)
Dec 09 12:01:49 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:01:49 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0)
Dec 09 12:01:49 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:01:49 compute-0 ceph-mgr[74679]: mgr load Constructed class from module: cephadm
Dec 09 12:01:49 compute-0 ceph-mgr[74679]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 12:01:49 compute-0 ceph-mgr[74679]: mgr load Constructed class from module: crash
Dec 09 12:01:49 compute-0 ceph-mgr[74679]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 12:01:49 compute-0 ceph-mgr[74679]: mgr load Constructed class from module: devicehealth
Dec 09 12:01:49 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec 09 12:01:49 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 09 12:01:49 compute-0 ceph-mgr[74679]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 12:01:49 compute-0 ceph-mgr[74679]: mgr load Constructed class from module: iostat
Dec 09 12:01:49 compute-0 ceph-mgr[74679]: [devicehealth INFO root] Starting
Dec 09 12:01:49 compute-0 ceph-mgr[74679]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 12:01:49 compute-0 ceph-mgr[74679]: mgr load Constructed class from module: nfs
Dec 09 12:01:49 compute-0 ceph-mgr[74679]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 12:01:49 compute-0 ceph-mgr[74679]: mgr load Constructed class from module: orchestrator
Dec 09 12:01:49 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec 09 12:01:49 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 09 12:01:49 compute-0 ceph-mgr[74679]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 12:01:49 compute-0 ceph-mgr[74679]: mgr load Constructed class from module: pg_autoscaler
Dec 09 12:01:49 compute-0 ceph-mgr[74679]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 12:01:49 compute-0 ceph-mgr[74679]: mgr load Constructed class from module: progress
Dec 09 12:01:49 compute-0 ceph-mgr[74679]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 12:01:49 compute-0 ceph-mgr[74679]: [progress INFO root] Loading...
Dec 09 12:01:49 compute-0 ceph-mgr[74679]: [progress INFO root] No stored events to load
Dec 09 12:01:49 compute-0 ceph-mgr[74679]: [progress INFO root] Loaded [] historic events
Dec 09 12:01:49 compute-0 ceph-mgr[74679]: [progress INFO root] Loaded OSDMap, ready.
Dec 09 12:01:49 compute-0 ceph-mgr[74679]: [pg_autoscaler INFO root] _maybe_adjust
Dec 09 12:01:49 compute-0 ceph-mgr[74679]: [rbd_support INFO root] recovery thread starting
Dec 09 12:01:49 compute-0 ceph-mgr[74679]: [rbd_support INFO root] starting setup
Dec 09 12:01:49 compute-0 ceph-mgr[74679]: mgr load Constructed class from module: rbd_support
Dec 09 12:01:49 compute-0 ceph-mgr[74679]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 12:01:49 compute-0 ceph-mgr[74679]: mgr load Constructed class from module: restful
Dec 09 12:01:49 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wfxreg/mirror_snapshot_schedule"} v 0)
Dec 09 12:01:49 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wfxreg/mirror_snapshot_schedule"}]: dispatch
Dec 09 12:01:49 compute-0 ceph-mgr[74679]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 12:01:49 compute-0 ceph-mgr[74679]: mgr load Constructed class from module: status
Dec 09 12:01:49 compute-0 ceph-mgr[74679]: [restful INFO root] server_addr: :: server_port: 8003
Dec 09 12:01:49 compute-0 ceph-mgr[74679]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 09 12:01:49 compute-0 ceph-mgr[74679]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 12:01:49 compute-0 ceph-mgr[74679]: mgr load Constructed class from module: telemetry
Dec 09 12:01:49 compute-0 ceph-mgr[74679]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec 09 12:01:49 compute-0 ceph-mgr[74679]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 12:01:49 compute-0 ceph-mgr[74679]: [restful WARNING root] server not running: no certificate configured
Dec 09 12:01:49 compute-0 ceph-mgr[74679]: [rbd_support INFO root] PerfHandler: starting
Dec 09 12:01:49 compute-0 ceph-mgr[74679]: [rbd_support INFO root] TaskHandler: starting
Dec 09 12:01:49 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wfxreg/trash_purge_schedule"} v 0)
Dec 09 12:01:49 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wfxreg/trash_purge_schedule"}]: dispatch
Dec 09 12:01:49 compute-0 ceph-mgr[74679]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 09 12:01:49 compute-0 ceph-mgr[74679]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec 09 12:01:49 compute-0 ceph-mgr[74679]: [rbd_support INFO root] setup complete
Dec 09 12:01:49 compute-0 ceph-mgr[74679]: mgr load Constructed class from module: volumes
Dec 09 12:01:50 compute-0 ceph-mon[74388]: osdmap e2: 0 total, 0 up, 0 in
Dec 09 12:01:50 compute-0 ceph-mon[74388]: mgrmap e6: compute-0.wfxreg(active, starting, since 1.55303s)
Dec 09 12:01:50 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 09 12:01:50 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mgr metadata", "who": "compute-0.wfxreg", "id": "compute-0.wfxreg"}]: dispatch
Dec 09 12:01:50 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec 09 12:01:50 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 09 12:01:50 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec 09 12:01:50 compute-0 ceph-mon[74388]: Manager daemon compute-0.wfxreg is now available
Dec 09 12:01:50 compute-0 ceph-mon[74388]: Found migration_current of "None". Setting to last migration.
Dec 09 12:01:50 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:01:50 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:01:50 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 09 12:01:50 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 09 12:01:50 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wfxreg/mirror_snapshot_schedule"}]: dispatch
Dec 09 12:01:50 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wfxreg/trash_purge_schedule"}]: dispatch
Dec 09 12:01:50 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.wfxreg(active, since 2s)
Dec 09 12:01:50 compute-0 ceph-mgr[74679]: log_channel(audit) log [DBG] : from='client.14128 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Dec 09 12:01:50 compute-0 ceph-mgr[74679]: log_channel(audit) log [DBG] : from='client.14128 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Dec 09 12:01:50 compute-0 youthful_keller[75274]: {
Dec 09 12:01:50 compute-0 youthful_keller[75274]:     "mgrmap_epoch": 7,
Dec 09 12:01:50 compute-0 youthful_keller[75274]:     "initialized": true
Dec 09 12:01:50 compute-0 youthful_keller[75274]: }
Dec 09 12:01:50 compute-0 systemd[1]: libpod-99bc3869d18624809201c51b0f8da5fae2cbd9e50283e4e64a87cfd0e19a33f6.scope: Deactivated successfully.
Dec 09 12:01:50 compute-0 podman[75257]: 2025-12-09 12:01:50.886905618 +0000 UTC m=+9.104846110 container died 99bc3869d18624809201c51b0f8da5fae2cbd9e50283e4e64a87cfd0e19a33f6 (image=quay.io/ceph/ceph:v19, name=youthful_keller, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 09 12:01:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ba7cf633747969f6737ed4e9eb4e2fee5e7a53ae4e5cda4fef68156036b2604-merged.mount: Deactivated successfully.
Dec 09 12:01:50 compute-0 podman[75257]: 2025-12-09 12:01:50.929334646 +0000 UTC m=+9.147275138 container remove 99bc3869d18624809201c51b0f8da5fae2cbd9e50283e4e64a87cfd0e19a33f6 (image=quay.io/ceph/ceph:v19, name=youthful_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 12:01:50 compute-0 systemd[1]: libpod-conmon-99bc3869d18624809201c51b0f8da5fae2cbd9e50283e4e64a87cfd0e19a33f6.scope: Deactivated successfully.
Dec 09 12:01:50 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.agent_endpoint_root_cert}] v 0)
Dec 09 12:01:50 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:01:50 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.agent_endpoint_key}] v 0)
Dec 09 12:01:50 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:01:50 compute-0 podman[75424]: 2025-12-09 12:01:50.992138901 +0000 UTC m=+0.039423529 container create 4babc59c86d1c6071fdb59c766fcf1fef5d61391022c38794c76e764b611ccca (image=quay.io/ceph/ceph:v19, name=trusting_mcclintock, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 12:01:51 compute-0 systemd[1]: Started libpod-conmon-4babc59c86d1c6071fdb59c766fcf1fef5d61391022c38794c76e764b611ccca.scope.
Dec 09 12:01:51 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:01:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ced3592b74ec51bf9e7101b987bea3b70f37972636bbd040a74588939994d2a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ced3592b74ec51bf9e7101b987bea3b70f37972636bbd040a74588939994d2a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ced3592b74ec51bf9e7101b987bea3b70f37972636bbd040a74588939994d2a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:51 compute-0 podman[75424]: 2025-12-09 12:01:51.060048767 +0000 UTC m=+0.107333415 container init 4babc59c86d1c6071fdb59c766fcf1fef5d61391022c38794c76e764b611ccca (image=quay.io/ceph/ceph:v19, name=trusting_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325)
Dec 09 12:01:51 compute-0 podman[75424]: 2025-12-09 12:01:51.065369928 +0000 UTC m=+0.112654556 container start 4babc59c86d1c6071fdb59c766fcf1fef5d61391022c38794c76e764b611ccca (image=quay.io/ceph/ceph:v19, name=trusting_mcclintock, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 09 12:01:51 compute-0 podman[75424]: 2025-12-09 12:01:51.069481942 +0000 UTC m=+0.116766570 container attach 4babc59c86d1c6071fdb59c766fcf1fef5d61391022c38794c76e764b611ccca (image=quay.io/ceph/ceph:v19, name=trusting_mcclintock, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 12:01:51 compute-0 podman[75424]: 2025-12-09 12:01:50.975095043 +0000 UTC m=+0.022379681 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:01:51 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019920188 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 12:01:51 compute-0 ceph-mgr[74679]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 12:01:51 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0)
Dec 09 12:01:51 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:01:51 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec 09 12:01:51 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 09 12:01:51 compute-0 systemd[1]: libpod-4babc59c86d1c6071fdb59c766fcf1fef5d61391022c38794c76e764b611ccca.scope: Deactivated successfully.
Dec 09 12:01:51 compute-0 podman[75424]: 2025-12-09 12:01:51.479987946 +0000 UTC m=+0.527272574 container died 4babc59c86d1c6071fdb59c766fcf1fef5d61391022c38794c76e764b611ccca (image=quay.io/ceph/ceph:v19, name=trusting_mcclintock, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 12:01:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ced3592b74ec51bf9e7101b987bea3b70f37972636bbd040a74588939994d2a-merged.mount: Deactivated successfully.
Dec 09 12:01:51 compute-0 podman[75424]: 2025-12-09 12:01:51.52110979 +0000 UTC m=+0.568394408 container remove 4babc59c86d1c6071fdb59c766fcf1fef5d61391022c38794c76e764b611ccca (image=quay.io/ceph/ceph:v19, name=trusting_mcclintock, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 09 12:01:51 compute-0 systemd[1]: libpod-conmon-4babc59c86d1c6071fdb59c766fcf1fef5d61391022c38794c76e764b611ccca.scope: Deactivated successfully.
Dec 09 12:01:51 compute-0 podman[75479]: 2025-12-09 12:01:51.579707486 +0000 UTC m=+0.038296874 container create 48b17f64f9ec5a69a8b906f9389841a4f80fabbd263cf262db205e630d436ec4 (image=quay.io/ceph/ceph:v19, name=determined_lovelace, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec 09 12:01:51 compute-0 systemd[1]: Started libpod-conmon-48b17f64f9ec5a69a8b906f9389841a4f80fabbd263cf262db205e630d436ec4.scope.
Dec 09 12:01:51 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:01:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f61982b32e31841a1de408b77a84aed8d216abc0757e721f6220662f81063db8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f61982b32e31841a1de408b77a84aed8d216abc0757e721f6220662f81063db8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f61982b32e31841a1de408b77a84aed8d216abc0757e721f6220662f81063db8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:51 compute-0 podman[75479]: 2025-12-09 12:01:51.562284538 +0000 UTC m=+0.020873936 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:01:51 compute-0 podman[75479]: 2025-12-09 12:01:51.660541436 +0000 UTC m=+0.119130834 container init 48b17f64f9ec5a69a8b906f9389841a4f80fabbd263cf262db205e630d436ec4 (image=quay.io/ceph/ceph:v19, name=determined_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 12:01:51 compute-0 podman[75479]: 2025-12-09 12:01:51.666633401 +0000 UTC m=+0.125222779 container start 48b17f64f9ec5a69a8b906f9389841a4f80fabbd263cf262db205e630d436ec4 (image=quay.io/ceph/ceph:v19, name=determined_lovelace, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 09 12:01:51 compute-0 podman[75479]: 2025-12-09 12:01:51.670284819 +0000 UTC m=+0.128874277 container attach 48b17f64f9ec5a69a8b906f9389841a4f80fabbd263cf262db205e630d436ec4 (image=quay.io/ceph/ceph:v19, name=determined_lovelace, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 12:01:51 compute-0 ceph-mgr[74679]: [cephadm INFO cherrypy.error] [09/Dec/2025:12:01:51] ENGINE Bus STARTING
Dec 09 12:01:51 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : [09/Dec/2025:12:01:51] ENGINE Bus STARTING
Dec 09 12:01:51 compute-0 ceph-mgr[74679]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 09 12:01:51 compute-0 ceph-mon[74388]: mgrmap e7: compute-0.wfxreg(active, since 2s)
Dec 09 12:01:51 compute-0 ceph-mon[74388]: from='client.14128 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Dec 09 12:01:51 compute-0 ceph-mon[74388]: from='client.14128 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Dec 09 12:01:51 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:01:51 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:01:51 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:01:51 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 09 12:01:51 compute-0 ceph-mgr[74679]: [cephadm INFO cherrypy.error] [09/Dec/2025:12:01:51] ENGINE Serving on https://192.168.122.100:7150
Dec 09 12:01:51 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : [09/Dec/2025:12:01:51] ENGINE Serving on https://192.168.122.100:7150
Dec 09 12:01:51 compute-0 ceph-mgr[74679]: [cephadm INFO cherrypy.error] [09/Dec/2025:12:01:51] ENGINE Client ('192.168.122.100', 38172) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 09 12:01:51 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : [09/Dec/2025:12:01:51] ENGINE Client ('192.168.122.100', 38172) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 09 12:01:51 compute-0 ceph-mgr[74679]: [cephadm INFO cherrypy.error] [09/Dec/2025:12:01:51] ENGINE Serving on http://192.168.122.100:8765
Dec 09 12:01:51 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : [09/Dec/2025:12:01:51] ENGINE Serving on http://192.168.122.100:8765
Dec 09 12:01:51 compute-0 ceph-mgr[74679]: [cephadm INFO cherrypy.error] [09/Dec/2025:12:01:51] ENGINE Bus STARTED
Dec 09 12:01:51 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : [09/Dec/2025:12:01:51] ENGINE Bus STARTED
Dec 09 12:01:51 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec 09 12:01:51 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 09 12:01:52 compute-0 ceph-mgr[74679]: log_channel(audit) log [DBG] : from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 12:01:52 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0)
Dec 09 12:01:52 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:01:52 compute-0 ceph-mgr[74679]: [cephadm INFO root] Set ssh ssh_user
Dec 09 12:01:52 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Dec 09 12:01:52 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0)
Dec 09 12:01:52 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:01:52 compute-0 ceph-mgr[74679]: [cephadm INFO root] Set ssh ssh_config
Dec 09 12:01:52 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Dec 09 12:01:52 compute-0 ceph-mgr[74679]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Dec 09 12:01:52 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Dec 09 12:01:52 compute-0 determined_lovelace[75496]: ssh user set to ceph-admin. sudo will be used
Dec 09 12:01:52 compute-0 systemd[1]: libpod-48b17f64f9ec5a69a8b906f9389841a4f80fabbd263cf262db205e630d436ec4.scope: Deactivated successfully.
Dec 09 12:01:52 compute-0 podman[75479]: 2025-12-09 12:01:52.08197957 +0000 UTC m=+0.540568948 container died 48b17f64f9ec5a69a8b906f9389841a4f80fabbd263cf262db205e630d436ec4 (image=quay.io/ceph/ceph:v19, name=determined_lovelace, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 12:01:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-f61982b32e31841a1de408b77a84aed8d216abc0757e721f6220662f81063db8-merged.mount: Deactivated successfully.
Dec 09 12:01:52 compute-0 podman[75479]: 2025-12-09 12:01:52.122476269 +0000 UTC m=+0.581065647 container remove 48b17f64f9ec5a69a8b906f9389841a4f80fabbd263cf262db205e630d436ec4 (image=quay.io/ceph/ceph:v19, name=determined_lovelace, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 09 12:01:52 compute-0 systemd[1]: libpod-conmon-48b17f64f9ec5a69a8b906f9389841a4f80fabbd263cf262db205e630d436ec4.scope: Deactivated successfully.
Dec 09 12:01:52 compute-0 podman[75556]: 2025-12-09 12:01:52.186828623 +0000 UTC m=+0.042524696 container create 7be5d76461ee9838e6ed3712988e2650017ab540ee1903d85f5de08f1fd03277 (image=quay.io/ceph/ceph:v19, name=intelligent_brahmagupta, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 09 12:01:52 compute-0 systemd[1]: Started libpod-conmon-7be5d76461ee9838e6ed3712988e2650017ab540ee1903d85f5de08f1fd03277.scope.
Dec 09 12:01:52 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:01:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09274f3ac82dc59f09db08e097ee597dcd95d3d21e7e81aa1a581cad7ffd27b2/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09274f3ac82dc59f09db08e097ee597dcd95d3d21e7e81aa1a581cad7ffd27b2/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09274f3ac82dc59f09db08e097ee597dcd95d3d21e7e81aa1a581cad7ffd27b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09274f3ac82dc59f09db08e097ee597dcd95d3d21e7e81aa1a581cad7ffd27b2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09274f3ac82dc59f09db08e097ee597dcd95d3d21e7e81aa1a581cad7ffd27b2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:52 compute-0 podman[75556]: 2025-12-09 12:01:52.169826917 +0000 UTC m=+0.025523020 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:01:52 compute-0 podman[75556]: 2025-12-09 12:01:52.271369221 +0000 UTC m=+0.127065304 container init 7be5d76461ee9838e6ed3712988e2650017ab540ee1903d85f5de08f1fd03277 (image=quay.io/ceph/ceph:v19, name=intelligent_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 12:01:52 compute-0 podman[75556]: 2025-12-09 12:01:52.277138849 +0000 UTC m=+0.132834932 container start 7be5d76461ee9838e6ed3712988e2650017ab540ee1903d85f5de08f1fd03277 (image=quay.io/ceph/ceph:v19, name=intelligent_brahmagupta, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 12:01:52 compute-0 podman[75556]: 2025-12-09 12:01:52.280535212 +0000 UTC m=+0.136231385 container attach 7be5d76461ee9838e6ed3712988e2650017ab540ee1903d85f5de08f1fd03277 (image=quay.io/ceph/ceph:v19, name=intelligent_brahmagupta, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec 09 12:01:52 compute-0 ceph-mgr[74679]: log_channel(audit) log [DBG] : from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 12:01:52 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0)
Dec 09 12:01:52 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:01:52 compute-0 ceph-mgr[74679]: [cephadm INFO root] Set ssh ssh_identity_key
Dec 09 12:01:52 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Dec 09 12:01:52 compute-0 ceph-mgr[74679]: [cephadm INFO root] Set ssh private key
Dec 09 12:01:52 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Set ssh private key
Dec 09 12:01:52 compute-0 systemd[1]: libpod-7be5d76461ee9838e6ed3712988e2650017ab540ee1903d85f5de08f1fd03277.scope: Deactivated successfully.
Dec 09 12:01:52 compute-0 podman[75556]: 2025-12-09 12:01:52.65148358 +0000 UTC m=+0.507179663 container died 7be5d76461ee9838e6ed3712988e2650017ab540ee1903d85f5de08f1fd03277 (image=quay.io/ceph/ceph:v19, name=intelligent_brahmagupta, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 09 12:01:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-09274f3ac82dc59f09db08e097ee597dcd95d3d21e7e81aa1a581cad7ffd27b2-merged.mount: Deactivated successfully.
Dec 09 12:01:52 compute-0 podman[75556]: 2025-12-09 12:01:52.708787923 +0000 UTC m=+0.564484006 container remove 7be5d76461ee9838e6ed3712988e2650017ab540ee1903d85f5de08f1fd03277 (image=quay.io/ceph/ceph:v19, name=intelligent_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec 09 12:01:52 compute-0 systemd[1]: libpod-conmon-7be5d76461ee9838e6ed3712988e2650017ab540ee1903d85f5de08f1fd03277.scope: Deactivated successfully.
Dec 09 12:01:52 compute-0 podman[75611]: 2025-12-09 12:01:52.782706459 +0000 UTC m=+0.050209721 container create 01e0335cb5681d82dd4c78a1854f1c52cc3cb01cbd96ebd4f8f59a8cf0c9c481 (image=quay.io/ceph/ceph:v19, name=reverent_hypatia, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 12:01:52 compute-0 systemd[1]: Started libpod-conmon-01e0335cb5681d82dd4c78a1854f1c52cc3cb01cbd96ebd4f8f59a8cf0c9c481.scope.
Dec 09 12:01:52 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:01:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/091ab9773fd9dc994318908a80eb4802eb7724e49337489d0b3096f074549d4f/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/091ab9773fd9dc994318908a80eb4802eb7724e49337489d0b3096f074549d4f/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/091ab9773fd9dc994318908a80eb4802eb7724e49337489d0b3096f074549d4f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/091ab9773fd9dc994318908a80eb4802eb7724e49337489d0b3096f074549d4f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/091ab9773fd9dc994318908a80eb4802eb7724e49337489d0b3096f074549d4f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:52 compute-0 podman[75611]: 2025-12-09 12:01:52.853968135 +0000 UTC m=+0.121471407 container init 01e0335cb5681d82dd4c78a1854f1c52cc3cb01cbd96ebd4f8f59a8cf0c9c481 (image=quay.io/ceph/ceph:v19, name=reverent_hypatia, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 12:01:52 compute-0 podman[75611]: 2025-12-09 12:01:52.760588554 +0000 UTC m=+0.028091846 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:01:52 compute-0 podman[75611]: 2025-12-09 12:01:52.865037413 +0000 UTC m=+0.132540675 container start 01e0335cb5681d82dd4c78a1854f1c52cc3cb01cbd96ebd4f8f59a8cf0c9c481 (image=quay.io/ceph/ceph:v19, name=reverent_hypatia, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 09 12:01:52 compute-0 ceph-mon[74388]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 12:01:52 compute-0 ceph-mon[74388]: [09/Dec/2025:12:01:51] ENGINE Bus STARTING
Dec 09 12:01:52 compute-0 ceph-mon[74388]: [09/Dec/2025:12:01:51] ENGINE Serving on https://192.168.122.100:7150
Dec 09 12:01:52 compute-0 ceph-mon[74388]: [09/Dec/2025:12:01:51] ENGINE Client ('192.168.122.100', 38172) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 09 12:01:52 compute-0 ceph-mon[74388]: [09/Dec/2025:12:01:51] ENGINE Serving on http://192.168.122.100:8765
Dec 09 12:01:52 compute-0 ceph-mon[74388]: [09/Dec/2025:12:01:51] ENGINE Bus STARTED
Dec 09 12:01:52 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 09 12:01:52 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:01:52 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:01:52 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:01:52 compute-0 podman[75611]: 2025-12-09 12:01:52.868967916 +0000 UTC m=+0.136471178 container attach 01e0335cb5681d82dd4c78a1854f1c52cc3cb01cbd96ebd4f8f59a8cf0c9c481 (image=quay.io/ceph/ceph:v19, name=reverent_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True)
Dec 09 12:01:53 compute-0 ceph-mgr[74679]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 12:01:53 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0)
Dec 09 12:01:53 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:01:53 compute-0 ceph-mgr[74679]: [cephadm INFO root] Set ssh ssh_identity_pub
Dec 09 12:01:53 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Dec 09 12:01:53 compute-0 systemd[1]: libpod-01e0335cb5681d82dd4c78a1854f1c52cc3cb01cbd96ebd4f8f59a8cf0c9c481.scope: Deactivated successfully.
Dec 09 12:01:53 compute-0 podman[75611]: 2025-12-09 12:01:53.240830636 +0000 UTC m=+0.508333898 container died 01e0335cb5681d82dd4c78a1854f1c52cc3cb01cbd96ebd4f8f59a8cf0c9c481 (image=quay.io/ceph/ceph:v19, name=reverent_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 09 12:01:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-091ab9773fd9dc994318908a80eb4802eb7724e49337489d0b3096f074549d4f-merged.mount: Deactivated successfully.
Dec 09 12:01:53 compute-0 podman[75611]: 2025-12-09 12:01:53.286320599 +0000 UTC m=+0.553823871 container remove 01e0335cb5681d82dd4c78a1854f1c52cc3cb01cbd96ebd4f8f59a8cf0c9c481 (image=quay.io/ceph/ceph:v19, name=reverent_hypatia, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 09 12:01:53 compute-0 systemd[1]: libpod-conmon-01e0335cb5681d82dd4c78a1854f1c52cc3cb01cbd96ebd4f8f59a8cf0c9c481.scope: Deactivated successfully.
Dec 09 12:01:53 compute-0 podman[75665]: 2025-12-09 12:01:53.342557742 +0000 UTC m=+0.038012360 container create ead1408a9582a2a4ec5b8c4a46f3f27f93d1f2d04fb7380b5aafc982bfb98c63 (image=quay.io/ceph/ceph:v19, name=cranky_golick, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec 09 12:01:53 compute-0 systemd[1]: Started libpod-conmon-ead1408a9582a2a4ec5b8c4a46f3f27f93d1f2d04fb7380b5aafc982bfb98c63.scope.
Dec 09 12:01:53 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:01:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20aa8f241490d8c1fdea7ea030b5e82da1eae7cff980f41ec3e5941c2cb74f6c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20aa8f241490d8c1fdea7ea030b5e82da1eae7cff980f41ec3e5941c2cb74f6c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20aa8f241490d8c1fdea7ea030b5e82da1eae7cff980f41ec3e5941c2cb74f6c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:53 compute-0 podman[75665]: 2025-12-09 12:01:53.327172918 +0000 UTC m=+0.022627556 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:01:53 compute-0 podman[75665]: 2025-12-09 12:01:53.428220974 +0000 UTC m=+0.123675632 container init ead1408a9582a2a4ec5b8c4a46f3f27f93d1f2d04fb7380b5aafc982bfb98c63 (image=quay.io/ceph/ceph:v19, name=cranky_golick, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 09 12:01:53 compute-0 podman[75665]: 2025-12-09 12:01:53.434623838 +0000 UTC m=+0.130078476 container start ead1408a9582a2a4ec5b8c4a46f3f27f93d1f2d04fb7380b5aafc982bfb98c63 (image=quay.io/ceph/ceph:v19, name=cranky_golick, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 09 12:01:53 compute-0 podman[75665]: 2025-12-09 12:01:53.437794298 +0000 UTC m=+0.133248946 container attach ead1408a9582a2a4ec5b8c4a46f3f27f93d1f2d04fb7380b5aafc982bfb98c63 (image=quay.io/ceph/ceph:v19, name=cranky_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 09 12:01:53 compute-0 ceph-mgr[74679]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 12:01:53 compute-0 cranky_golick[75682]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC7arWKqk+PuQOJbreP/xfd5m+9Mcb1kZG6aIAd/ZpcK2Y1lMiht4hzOiV5srOQ2xw2YpnVvXs40FuhUtYwjwSd91P9QaHRA/7QWe2bzR0Qu/mFOiRkuc88v31mX3qpT7bS736UEKf7+9oS7X2Kcn6/7SaGIecwb7ooQ1TFYz0h1hRpf1HpIivGrmdtfGduRQrchmC783MIMhyUnu3AejGRC5LXqcz954DbQBRP27aEfuSq4pphDzJiRRyBWwLWzvw/hsPaao8OxQz8C/USrD/8XCYlFO65MMfPJev2ZeL1pLWtduw4supc6/0uyiDTQpTcxto9aJ5qfTDLR/diTNMlAjXOjE1rlBI2P+q6FsXbo88d6ZmTPPjHiLDReLSybS+vykjYfD+FZudhl2oA07yGdStmlMQ1J6CJTYO0xjP3xAwhOko9tVrvTZBGOF6+qDtYSQos47Jfyu1UKsp/tAu9Dfo1y4J21ZuONc/2V9E+GPJiXPmynWgIQzoUHbXTr+U= zuul@controller
Dec 09 12:01:53 compute-0 systemd[1]: libpod-ead1408a9582a2a4ec5b8c4a46f3f27f93d1f2d04fb7380b5aafc982bfb98c63.scope: Deactivated successfully.
Dec 09 12:01:53 compute-0 podman[75665]: 2025-12-09 12:01:53.813644005 +0000 UTC m=+0.509098643 container died ead1408a9582a2a4ec5b8c4a46f3f27f93d1f2d04fb7380b5aafc982bfb98c63 (image=quay.io/ceph/ceph:v19, name=cranky_golick, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 09 12:01:53 compute-0 ceph-mgr[74679]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 09 12:01:54 compute-0 ceph-mon[74388]: from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 12:01:54 compute-0 ceph-mon[74388]: Set ssh ssh_user
Dec 09 12:01:54 compute-0 ceph-mon[74388]: Set ssh ssh_config
Dec 09 12:01:54 compute-0 ceph-mon[74388]: ssh user set to ceph-admin. sudo will be used
Dec 09 12:01:54 compute-0 ceph-mon[74388]: from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 12:01:54 compute-0 ceph-mon[74388]: Set ssh ssh_identity_key
Dec 09 12:01:54 compute-0 ceph-mon[74388]: Set ssh private key
Dec 09 12:01:54 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:01:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-20aa8f241490d8c1fdea7ea030b5e82da1eae7cff980f41ec3e5941c2cb74f6c-merged.mount: Deactivated successfully.
Dec 09 12:01:54 compute-0 podman[75665]: 2025-12-09 12:01:54.793043394 +0000 UTC m=+1.488498032 container remove ead1408a9582a2a4ec5b8c4a46f3f27f93d1f2d04fb7380b5aafc982bfb98c63 (image=quay.io/ceph/ceph:v19, name=cranky_golick, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 12:01:54 compute-0 systemd[1]: libpod-conmon-ead1408a9582a2a4ec5b8c4a46f3f27f93d1f2d04fb7380b5aafc982bfb98c63.scope: Deactivated successfully.
Dec 09 12:01:54 compute-0 podman[75724]: 2025-12-09 12:01:54.860557087 +0000 UTC m=+0.043539993 container create a81661eccd75275cc7f2c5f7e4c03b372bc1b217dbdab9d3567a85102538e847 (image=quay.io/ceph/ceph:v19, name=beautiful_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec 09 12:01:54 compute-0 systemd[1]: Started libpod-conmon-a81661eccd75275cc7f2c5f7e4c03b372bc1b217dbdab9d3567a85102538e847.scope.
Dec 09 12:01:54 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:01:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1208fceeae07d5a80f4a6326fa1a940e7535c79983dc25ecfcb0793b6f8d1a42/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1208fceeae07d5a80f4a6326fa1a940e7535c79983dc25ecfcb0793b6f8d1a42/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1208fceeae07d5a80f4a6326fa1a940e7535c79983dc25ecfcb0793b6f8d1a42/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:01:54 compute-0 podman[75724]: 2025-12-09 12:01:54.920423435 +0000 UTC m=+0.103406351 container init a81661eccd75275cc7f2c5f7e4c03b372bc1b217dbdab9d3567a85102538e847 (image=quay.io/ceph/ceph:v19, name=beautiful_knuth, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 12:01:54 compute-0 podman[75724]: 2025-12-09 12:01:54.926840899 +0000 UTC m=+0.109823805 container start a81661eccd75275cc7f2c5f7e4c03b372bc1b217dbdab9d3567a85102538e847 (image=quay.io/ceph/ceph:v19, name=beautiful_knuth, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 12:01:54 compute-0 podman[75724]: 2025-12-09 12:01:54.937169135 +0000 UTC m=+0.120152071 container attach a81661eccd75275cc7f2c5f7e4c03b372bc1b217dbdab9d3567a85102538e847 (image=quay.io/ceph/ceph:v19, name=beautiful_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 12:01:54 compute-0 podman[75724]: 2025-12-09 12:01:54.843817787 +0000 UTC m=+0.026800713 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:01:55 compute-0 ceph-mgr[74679]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 12:01:55 compute-0 ceph-mon[74388]: from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 12:01:55 compute-0 ceph-mon[74388]: Set ssh ssh_identity_pub
Dec 09 12:01:55 compute-0 ceph-mon[74388]: from='client.14144 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 12:01:55 compute-0 sshd-session[75767]: Accepted publickey for ceph-admin from 192.168.122.100 port 57428 ssh2: RSA SHA256:9gI9N7BVF766ydxek6duxvVO5SKV8ll995eSm4AS2/E
Dec 09 12:01:55 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Dec 09 12:01:55 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Dec 09 12:01:55 compute-0 systemd-logind[799]: New session 21 of user ceph-admin.
Dec 09 12:01:55 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Dec 09 12:01:55 compute-0 systemd[1]: Starting User Manager for UID 42477...
Dec 09 12:01:55 compute-0 systemd[75771]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 09 12:01:55 compute-0 sshd-session[75784]: Accepted publickey for ceph-admin from 192.168.122.100 port 57436 ssh2: RSA SHA256:9gI9N7BVF766ydxek6duxvVO5SKV8ll995eSm4AS2/E
Dec 09 12:01:55 compute-0 systemd-logind[799]: New session 23 of user ceph-admin.
Dec 09 12:01:55 compute-0 systemd[75771]: Queued start job for default target Main User Target.
Dec 09 12:01:55 compute-0 systemd[75771]: Created slice User Application Slice.
Dec 09 12:01:55 compute-0 systemd[75771]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 09 12:01:55 compute-0 systemd[75771]: Started Daily Cleanup of User's Temporary Directories.
Dec 09 12:01:55 compute-0 systemd[75771]: Reached target Paths.
Dec 09 12:01:55 compute-0 systemd[75771]: Reached target Timers.
Dec 09 12:01:55 compute-0 systemd[75771]: Starting D-Bus User Message Bus Socket...
Dec 09 12:01:55 compute-0 systemd[75771]: Starting Create User's Volatile Files and Directories...
Dec 09 12:01:55 compute-0 systemd[75771]: Listening on D-Bus User Message Bus Socket.
Dec 09 12:01:55 compute-0 systemd[75771]: Reached target Sockets.
Dec 09 12:01:55 compute-0 systemd[75771]: Finished Create User's Volatile Files and Directories.
Dec 09 12:01:55 compute-0 systemd[75771]: Reached target Basic System.
Dec 09 12:01:55 compute-0 systemd[75771]: Reached target Main User Target.
Dec 09 12:01:55 compute-0 systemd[75771]: Startup finished in 147ms.
Dec 09 12:01:55 compute-0 systemd[1]: Started User Manager for UID 42477.
Dec 09 12:01:55 compute-0 systemd[1]: Started Session 21 of User ceph-admin.
Dec 09 12:01:55 compute-0 systemd[1]: Started Session 23 of User ceph-admin.
Dec 09 12:01:55 compute-0 sshd-session[75767]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 09 12:01:55 compute-0 sshd-session[75784]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 09 12:01:55 compute-0 sudo[75791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 12:01:55 compute-0 sudo[75791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:01:55 compute-0 sudo[75791]: pam_unix(sudo:session): session closed for user root
Dec 09 12:01:55 compute-0 ceph-mgr[74679]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 09 12:01:56 compute-0 sshd-session[75816]: Accepted publickey for ceph-admin from 192.168.122.100 port 57442 ssh2: RSA SHA256:9gI9N7BVF766ydxek6duxvVO5SKV8ll995eSm4AS2/E
Dec 09 12:01:56 compute-0 systemd-logind[799]: New session 24 of user ceph-admin.
Dec 09 12:01:56 compute-0 systemd[1]: Started Session 24 of User ceph-admin.
Dec 09 12:01:56 compute-0 sshd-session[75816]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 09 12:01:56 compute-0 sudo[75820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host --expect-hostname compute-0
Dec 09 12:01:56 compute-0 sudo[75820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:01:56 compute-0 sudo[75820]: pam_unix(sudo:session): session closed for user root
Dec 09 12:01:56 compute-0 sshd-session[75845]: Accepted publickey for ceph-admin from 192.168.122.100 port 57458 ssh2: RSA SHA256:9gI9N7BVF766ydxek6duxvVO5SKV8ll995eSm4AS2/E
Dec 09 12:01:56 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020052995 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 12:01:56 compute-0 systemd-logind[799]: New session 25 of user ceph-admin.
Dec 09 12:01:56 compute-0 systemd[1]: Started Session 25 of User ceph-admin.
Dec 09 12:01:56 compute-0 ceph-mon[74388]: from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 12:01:56 compute-0 sshd-session[75845]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 09 12:01:56 compute-0 sudo[75849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36
Dec 09 12:01:56 compute-0 sudo[75849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:01:56 compute-0 sudo[75849]: pam_unix(sudo:session): session closed for user root
Dec 09 12:01:56 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Dec 09 12:01:56 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Dec 09 12:01:56 compute-0 sshd-session[75874]: Accepted publickey for ceph-admin from 192.168.122.100 port 57470 ssh2: RSA SHA256:9gI9N7BVF766ydxek6duxvVO5SKV8ll995eSm4AS2/E
Dec 09 12:01:56 compute-0 systemd-logind[799]: New session 26 of user ceph-admin.
Dec 09 12:01:56 compute-0 systemd[1]: Started Session 26 of User ceph-admin.
Dec 09 12:01:56 compute-0 sshd-session[75874]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 09 12:01:56 compute-0 sudo[75878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732
Dec 09 12:01:56 compute-0 sudo[75878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:01:56 compute-0 sudo[75878]: pam_unix(sudo:session): session closed for user root
Dec 09 12:01:56 compute-0 sshd-session[75903]: Accepted publickey for ceph-admin from 192.168.122.100 port 57474 ssh2: RSA SHA256:9gI9N7BVF766ydxek6duxvVO5SKV8ll995eSm4AS2/E
Dec 09 12:01:56 compute-0 systemd-logind[799]: New session 27 of user ceph-admin.
Dec 09 12:01:56 compute-0 systemd[1]: Started Session 27 of User ceph-admin.
Dec 09 12:01:56 compute-0 sshd-session[75903]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 09 12:01:57 compute-0 sudo[75907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732
Dec 09 12:01:57 compute-0 sudo[75907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:01:57 compute-0 sudo[75907]: pam_unix(sudo:session): session closed for user root
Dec 09 12:01:57 compute-0 sshd-session[75932]: Accepted publickey for ceph-admin from 192.168.122.100 port 57478 ssh2: RSA SHA256:9gI9N7BVF766ydxek6duxvVO5SKV8ll995eSm4AS2/E
Dec 09 12:01:57 compute-0 systemd-logind[799]: New session 28 of user ceph-admin.
Dec 09 12:01:57 compute-0 systemd[1]: Started Session 28 of User ceph-admin.
Dec 09 12:01:57 compute-0 sshd-session[75932]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 09 12:01:57 compute-0 ceph-mon[74388]: Deploying cephadm binary to compute-0
Dec 09 12:01:57 compute-0 sudo[75936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36.new
Dec 09 12:01:57 compute-0 sudo[75936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:01:57 compute-0 sudo[75936]: pam_unix(sudo:session): session closed for user root
Dec 09 12:01:57 compute-0 sshd-session[75961]: Accepted publickey for ceph-admin from 192.168.122.100 port 57492 ssh2: RSA SHA256:9gI9N7BVF766ydxek6duxvVO5SKV8ll995eSm4AS2/E
Dec 09 12:01:57 compute-0 systemd-logind[799]: New session 29 of user ceph-admin.
Dec 09 12:01:57 compute-0 systemd[1]: Started Session 29 of User ceph-admin.
Dec 09 12:01:57 compute-0 sshd-session[75961]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 09 12:01:57 compute-0 sudo[75965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732
Dec 09 12:01:57 compute-0 sudo[75965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:01:57 compute-0 sudo[75965]: pam_unix(sudo:session): session closed for user root
Dec 09 12:01:57 compute-0 ceph-mgr[74679]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 09 12:01:57 compute-0 sshd-session[75990]: Accepted publickey for ceph-admin from 192.168.122.100 port 57498 ssh2: RSA SHA256:9gI9N7BVF766ydxek6duxvVO5SKV8ll995eSm4AS2/E
Dec 09 12:01:57 compute-0 systemd-logind[799]: New session 30 of user ceph-admin.
Dec 09 12:01:57 compute-0 systemd[1]: Started Session 30 of User ceph-admin.
Dec 09 12:01:58 compute-0 sshd-session[75990]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 09 12:01:58 compute-0 sudo[75994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36.new
Dec 09 12:01:58 compute-0 sudo[75994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:01:58 compute-0 sudo[75994]: pam_unix(sudo:session): session closed for user root
Dec 09 12:01:58 compute-0 sshd-session[76019]: Accepted publickey for ceph-admin from 192.168.122.100 port 58228 ssh2: RSA SHA256:9gI9N7BVF766ydxek6duxvVO5SKV8ll995eSm4AS2/E
Dec 09 12:01:58 compute-0 systemd-logind[799]: New session 31 of user ceph-admin.
Dec 09 12:01:58 compute-0 systemd[1]: Started Session 31 of User ceph-admin.
Dec 09 12:01:58 compute-0 sshd-session[76019]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 09 12:01:59 compute-0 sshd-session[76046]: Accepted publickey for ceph-admin from 192.168.122.100 port 58244 ssh2: RSA SHA256:9gI9N7BVF766ydxek6duxvVO5SKV8ll995eSm4AS2/E
Dec 09 12:01:59 compute-0 systemd-logind[799]: New session 32 of user ceph-admin.
Dec 09 12:01:59 compute-0 systemd[1]: Started Session 32 of User ceph-admin.
Dec 09 12:01:59 compute-0 sshd-session[76046]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 09 12:01:59 compute-0 sudo[76050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36.new /var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36
Dec 09 12:01:59 compute-0 sudo[76050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:01:59 compute-0 sudo[76050]: pam_unix(sudo:session): session closed for user root
Dec 09 12:01:59 compute-0 sshd-session[76075]: Accepted publickey for ceph-admin from 192.168.122.100 port 58256 ssh2: RSA SHA256:9gI9N7BVF766ydxek6duxvVO5SKV8ll995eSm4AS2/E
Dec 09 12:01:59 compute-0 systemd-logind[799]: New session 33 of user ceph-admin.
Dec 09 12:01:59 compute-0 systemd[1]: Started Session 33 of User ceph-admin.
Dec 09 12:01:59 compute-0 sshd-session[76075]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 09 12:01:59 compute-0 ceph-mgr[74679]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 09 12:01:59 compute-0 sudo[76079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host --expect-hostname compute-0
Dec 09 12:01:59 compute-0 sudo[76079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:02:00 compute-0 sudo[76079]: pam_unix(sudo:session): session closed for user root
Dec 09 12:02:00 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 09 12:02:00 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:00 compute-0 ceph-mgr[74679]: [cephadm INFO root] Added host compute-0
Dec 09 12:02:00 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Added host compute-0
Dec 09 12:02:00 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec 09 12:02:00 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 09 12:02:00 compute-0 beautiful_knuth[75741]: Added host 'compute-0' with addr '192.168.122.100'
Dec 09 12:02:00 compute-0 systemd[1]: libpod-a81661eccd75275cc7f2c5f7e4c03b372bc1b217dbdab9d3567a85102538e847.scope: Deactivated successfully.
Dec 09 12:02:00 compute-0 podman[75724]: 2025-12-09 12:02:00.308205282 +0000 UTC m=+5.491188188 container died a81661eccd75275cc7f2c5f7e4c03b372bc1b217dbdab9d3567a85102538e847 (image=quay.io/ceph/ceph:v19, name=beautiful_knuth, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 12:02:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-1208fceeae07d5a80f4a6326fa1a940e7535c79983dc25ecfcb0793b6f8d1a42-merged.mount: Deactivated successfully.
Dec 09 12:02:00 compute-0 sudo[76124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 12:02:00 compute-0 sudo[76124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:02:00 compute-0 sudo[76124]: pam_unix(sudo:session): session closed for user root
Dec 09 12:02:00 compute-0 podman[75724]: 2025-12-09 12:02:00.355184858 +0000 UTC m=+5.538167774 container remove a81661eccd75275cc7f2c5f7e4c03b372bc1b217dbdab9d3567a85102538e847 (image=quay.io/ceph/ceph:v19, name=beautiful_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 12:02:00 compute-0 systemd[1]: libpod-conmon-a81661eccd75275cc7f2c5f7e4c03b372bc1b217dbdab9d3567a85102538e847.scope: Deactivated successfully.
Dec 09 12:02:00 compute-0 sudo[76159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph:v19 --timeout 895 pull
Dec 09 12:02:00 compute-0 sudo[76159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:02:00 compute-0 podman[76162]: 2025-12-09 12:02:00.41773771 +0000 UTC m=+0.040105318 container create fa018a3ff2e3a63ecfe3826389e3a34194ecbeec643a94934a87c811a3affb3a (image=quay.io/ceph/ceph:v19, name=objective_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 09 12:02:00 compute-0 systemd[1]: Started libpod-conmon-fa018a3ff2e3a63ecfe3826389e3a34194ecbeec643a94934a87c811a3affb3a.scope.
Dec 09 12:02:00 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:02:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/182901ce3ce1a382726d9ecd7f2466137210af575969568a644d46d70e539c2f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:02:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/182901ce3ce1a382726d9ecd7f2466137210af575969568a644d46d70e539c2f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:02:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/182901ce3ce1a382726d9ecd7f2466137210af575969568a644d46d70e539c2f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 12:02:00 compute-0 podman[76162]: 2025-12-09 12:02:00.398502898 +0000 UTC m=+0.020870536 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:02:00 compute-0 podman[76162]: 2025-12-09 12:02:00.507435332 +0000 UTC m=+0.129802940 container init fa018a3ff2e3a63ecfe3826389e3a34194ecbeec643a94934a87c811a3affb3a (image=quay.io/ceph/ceph:v19, name=objective_mendel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 12:02:00 compute-0 podman[76162]: 2025-12-09 12:02:00.516240851 +0000 UTC m=+0.138608459 container start fa018a3ff2e3a63ecfe3826389e3a34194ecbeec643a94934a87c811a3affb3a (image=quay.io/ceph/ceph:v19, name=objective_mendel, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec 09 12:02:00 compute-0 podman[76162]: 2025-12-09 12:02:00.534837097 +0000 UTC m=+0.157204705 container attach fa018a3ff2e3a63ecfe3826389e3a34194ecbeec643a94934a87c811a3affb3a (image=quay.io/ceph/ceph:v19, name=objective_mendel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 12:02:00 compute-0 ceph-mgr[74679]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 12:02:00 compute-0 ceph-mgr[74679]: [cephadm INFO root] Saving service mon spec with placement count:5
Dec 09 12:02:00 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Dec 09 12:02:00 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 09 12:02:00 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:00 compute-0 objective_mendel[76201]: Scheduled mon update...
Dec 09 12:02:00 compute-0 systemd[1]: libpod-fa018a3ff2e3a63ecfe3826389e3a34194ecbeec643a94934a87c811a3affb3a.scope: Deactivated successfully.
Dec 09 12:02:00 compute-0 podman[76162]: 2025-12-09 12:02:00.897916958 +0000 UTC m=+0.520284566 container died fa018a3ff2e3a63ecfe3826389e3a34194ecbeec643a94934a87c811a3affb3a (image=quay.io/ceph/ceph:v19, name=objective_mendel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 09 12:02:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-182901ce3ce1a382726d9ecd7f2466137210af575969568a644d46d70e539c2f-merged.mount: Deactivated successfully.
Dec 09 12:02:00 compute-0 podman[76162]: 2025-12-09 12:02:00.931079881 +0000 UTC m=+0.553447489 container remove fa018a3ff2e3a63ecfe3826389e3a34194ecbeec643a94934a87c811a3affb3a (image=quay.io/ceph/ceph:v19, name=objective_mendel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Dec 09 12:02:00 compute-0 systemd[1]: libpod-conmon-fa018a3ff2e3a63ecfe3826389e3a34194ecbeec643a94934a87c811a3affb3a.scope: Deactivated successfully.
Dec 09 12:02:00 compute-0 podman[76263]: 2025-12-09 12:02:00.9919985 +0000 UTC m=+0.039588269 container create 115904330c63b670e0233bd7a415e229c34379fc29b35732174c12eb00e61049 (image=quay.io/ceph/ceph:v19, name=inspiring_ride, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 12:02:01 compute-0 systemd[1]: Started libpod-conmon-115904330c63b670e0233bd7a415e229c34379fc29b35732174c12eb00e61049.scope.
Dec 09 12:02:01 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:02:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1e81fbe59353b8acbc171da2c030118b2e99da9423269ea25ac89a46841a25a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 12:02:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1e81fbe59353b8acbc171da2c030118b2e99da9423269ea25ac89a46841a25a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:02:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1e81fbe59353b8acbc171da2c030118b2e99da9423269ea25ac89a46841a25a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:02:01 compute-0 podman[76263]: 2025-12-09 12:02:01.06035293 +0000 UTC m=+0.107942729 container init 115904330c63b670e0233bd7a415e229c34379fc29b35732174c12eb00e61049 (image=quay.io/ceph/ceph:v19, name=inspiring_ride, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Dec 09 12:02:01 compute-0 podman[76263]: 2025-12-09 12:02:01.065964468 +0000 UTC m=+0.113554247 container start 115904330c63b670e0233bd7a415e229c34379fc29b35732174c12eb00e61049 (image=quay.io/ceph/ceph:v19, name=inspiring_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Dec 09 12:02:01 compute-0 podman[76263]: 2025-12-09 12:02:01.068927997 +0000 UTC m=+0.116517776 container attach 115904330c63b670e0233bd7a415e229c34379fc29b35732174c12eb00e61049 (image=quay.io/ceph/ceph:v19, name=inspiring_ride, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec 09 12:02:01 compute-0 podman[76263]: 2025-12-09 12:02:00.975255959 +0000 UTC m=+0.022845768 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:02:01 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:01 compute-0 ceph-mon[74388]: Added host compute-0
Dec 09 12:02:01 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 09 12:02:01 compute-0 ceph-mon[74388]: from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 12:02:01 compute-0 ceph-mon[74388]: Saving service mon spec with placement count:5
Dec 09 12:02:01 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:01 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054709 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
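[Note] "Added host compute-0" a few lines up is the orchestrator registering this node in its inventory before any specs can be scheduled onto it. During bootstrap this happens automatically; the equivalent manual calls, as a sketch, would be:

    # Register the host with the orchestrator (what produced "Added host compute-0")
    ceph orch host add compute-0
    # Verify the inventory
    ceph orch host ls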
Dec 09 12:02:01 compute-0 podman[76218]: 2025-12-09 12:02:01.3800794 +0000 UTC m=+0.752553692 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:02:01 compute-0 ceph-mgr[74679]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 12:02:01 compute-0 ceph-mgr[74679]: [cephadm INFO root] Saving service mgr spec with placement count:2
Dec 09 12:02:01 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Dec 09 12:02:01 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 09 12:02:01 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:01 compute-0 inspiring_ride[76280]: Scheduled mgr update...
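[Note] The manager service gets the same treatment: "orch apply mgr" with a placement count of 2, saved under mgr/cephadm/spec.mgr. The equivalent one-liner, under the same assumptions as above:

    # Two mgr daemons, matching "placement count:2" in the log
    ceph orch apply mgr --placement=2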
Dec 09 12:02:01 compute-0 systemd[1]: libpod-115904330c63b670e0233bd7a415e229c34379fc29b35732174c12eb00e61049.scope: Deactivated successfully.
Dec 09 12:02:01 compute-0 podman[76263]: 2025-12-09 12:02:01.468696231 +0000 UTC m=+0.516286030 container died 115904330c63b670e0233bd7a415e229c34379fc29b35732174c12eb00e61049 (image=quay.io/ceph/ceph:v19, name=inspiring_ride, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec 09 12:02:01 compute-0 podman[76318]: 2025-12-09 12:02:01.481060483 +0000 UTC m=+0.042550027 container create 323717b78170d49e09febc43e6ecf15470b21b8db25b4ab4a2b6261767a56d32 (image=quay.io/ceph/ceph:v19, name=distracted_kalam, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec 09 12:02:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-a1e81fbe59353b8acbc171da2c030118b2e99da9423269ea25ac89a46841a25a-merged.mount: Deactivated successfully.
Dec 09 12:02:01 compute-0 podman[76263]: 2025-12-09 12:02:01.507932268 +0000 UTC m=+0.555522047 container remove 115904330c63b670e0233bd7a415e229c34379fc29b35732174c12eb00e61049 (image=quay.io/ceph/ceph:v19, name=inspiring_ride, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 09 12:02:01 compute-0 systemd[1]: Started libpod-conmon-323717b78170d49e09febc43e6ecf15470b21b8db25b4ab4a2b6261767a56d32.scope.
Dec 09 12:02:01 compute-0 systemd[1]: libpod-conmon-115904330c63b670e0233bd7a415e229c34379fc29b35732174c12eb00e61049.scope: Deactivated successfully.
Dec 09 12:02:01 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:02:01 compute-0 podman[76318]: 2025-12-09 12:02:01.555083075 +0000 UTC m=+0.116572639 container init 323717b78170d49e09febc43e6ecf15470b21b8db25b4ab4a2b6261767a56d32 (image=quay.io/ceph/ceph:v19, name=distracted_kalam, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 12:02:01 compute-0 podman[76318]: 2025-12-09 12:02:01.460915849 +0000 UTC m=+0.022405413 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:02:01 compute-0 podman[76318]: 2025-12-09 12:02:01.559900329 +0000 UTC m=+0.121389873 container start 323717b78170d49e09febc43e6ecf15470b21b8db25b4ab4a2b6261767a56d32 (image=quay.io/ceph/ceph:v19, name=distracted_kalam, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec 09 12:02:01 compute-0 podman[76345]: 2025-12-09 12:02:01.561215863 +0000 UTC m=+0.035642284 container create 1b93b95f0599d7be90749be305995a29401950389620b370105843a28bc3aa2c (image=quay.io/ceph/ceph:v19, name=zealous_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 12:02:01 compute-0 podman[76318]: 2025-12-09 12:02:01.56767498 +0000 UTC m=+0.129164544 container attach 323717b78170d49e09febc43e6ecf15470b21b8db25b4ab4a2b6261767a56d32 (image=quay.io/ceph/ceph:v19, name=distracted_kalam, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 09 12:02:01 compute-0 systemd[1]: Started libpod-conmon-1b93b95f0599d7be90749be305995a29401950389620b370105843a28bc3aa2c.scope.
Dec 09 12:02:01 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:02:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed9e24c71da0d11de1912687e7e6564fb8af3a8d1269ee58195f07a469c0073e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:02:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed9e24c71da0d11de1912687e7e6564fb8af3a8d1269ee58195f07a469c0073e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:02:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed9e24c71da0d11de1912687e7e6564fb8af3a8d1269ee58195f07a469c0073e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 12:02:01 compute-0 podman[76345]: 2025-12-09 12:02:01.613966197 +0000 UTC m=+0.088392648 container init 1b93b95f0599d7be90749be305995a29401950389620b370105843a28bc3aa2c (image=quay.io/ceph/ceph:v19, name=zealous_mccarthy, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 12:02:01 compute-0 podman[76345]: 2025-12-09 12:02:01.62229193 +0000 UTC m=+0.096718361 container start 1b93b95f0599d7be90749be305995a29401950389620b370105843a28bc3aa2c (image=quay.io/ceph/ceph:v19, name=zealous_mccarthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 09 12:02:01 compute-0 podman[76345]: 2025-12-09 12:02:01.625257788 +0000 UTC m=+0.099684209 container attach 1b93b95f0599d7be90749be305995a29401950389620b370105843a28bc3aa2c (image=quay.io/ceph/ceph:v19, name=zealous_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec 09 12:02:01 compute-0 podman[76345]: 2025-12-09 12:02:01.544851394 +0000 UTC m=+0.019277835 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:02:01 compute-0 distracted_kalam[76346]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)
Dec 09 12:02:01 compute-0 systemd[1]: libpod-323717b78170d49e09febc43e6ecf15470b21b8db25b4ab4a2b6261767a56d32.scope: Deactivated successfully.
Dec 09 12:02:01 compute-0 podman[76318]: 2025-12-09 12:02:01.664467834 +0000 UTC m=+0.225957398 container died 323717b78170d49e09febc43e6ecf15470b21b8db25b4ab4a2b6261767a56d32 (image=quay.io/ceph/ceph:v19, name=distracted_kalam, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 12:02:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b196c355165dbda8ea0cb9c4e230f6bb9188fe3b5594f0640ec803b2754b8a5-merged.mount: Deactivated successfully.
Dec 09 12:02:01 compute-0 podman[76318]: 2025-12-09 12:02:01.707508528 +0000 UTC m=+0.268998062 container remove 323717b78170d49e09febc43e6ecf15470b21b8db25b4ab4a2b6261767a56d32 (image=quay.io/ceph/ceph:v19, name=distracted_kalam, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 12:02:01 compute-0 systemd[1]: libpod-conmon-323717b78170d49e09febc43e6ecf15470b21b8db25b4ab4a2b6261767a56d32.scope: Deactivated successfully.
Dec 09 12:02:01 compute-0 sudo[76159]: pam_unix(sudo:session): session closed for user root
Dec 09 12:02:01 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0)
Dec 09 12:02:01 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:01 compute-0 sudo[76406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 12:02:01 compute-0 sudo[76406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:02:01 compute-0 sudo[76406]: pam_unix(sudo:session): session closed for user root
Dec 09 12:02:01 compute-0 ceph-mgr[74679]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 09 12:02:01 compute-0 sudo[76431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Dec 09 12:02:01 compute-0 sudo[76431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:02:01 compute-0 ceph-mgr[74679]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 12:02:01 compute-0 ceph-mgr[74679]: [cephadm INFO root] Saving service crash spec with placement *
Dec 09 12:02:01 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Dec 09 12:02:01 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec 09 12:02:01 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:01 compute-0 zealous_mccarthy[76366]: Scheduled crash update...
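[Note] The crash-collector spec follows with a wildcard placement ("placement *" in the log), meaning one crash agent on every host. A hedged equivalent:

    # Deploy the crash agent cluster-wide ("placement *")
    ceph orch apply crash --placement='*'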
Dec 09 12:02:02 compute-0 systemd[1]: libpod-1b93b95f0599d7be90749be305995a29401950389620b370105843a28bc3aa2c.scope: Deactivated successfully.
Dec 09 12:02:02 compute-0 conmon[76366]: conmon 1b93b95f0599d7be9074 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1b93b95f0599d7be90749be305995a29401950389620b370105843a28bc3aa2c.scope/container/memory.events
Dec 09 12:02:02 compute-0 podman[76345]: 2025-12-09 12:02:02.013322979 +0000 UTC m=+0.487749390 container died 1b93b95f0599d7be90749be305995a29401950389620b370105843a28bc3aa2c (image=quay.io/ceph/ceph:v19, name=zealous_mccarthy, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec 09 12:02:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed9e24c71da0d11de1912687e7e6564fb8af3a8d1269ee58195f07a469c0073e-merged.mount: Deactivated successfully.
Dec 09 12:02:02 compute-0 podman[76345]: 2025-12-09 12:02:02.049953188 +0000 UTC m=+0.524379609 container remove 1b93b95f0599d7be90749be305995a29401950389620b370105843a28bc3aa2c (image=quay.io/ceph/ceph:v19, name=zealous_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec 09 12:02:02 compute-0 systemd[1]: libpod-conmon-1b93b95f0599d7be90749be305995a29401950389620b370105843a28bc3aa2c.scope: Deactivated successfully.
Dec 09 12:02:02 compute-0 podman[76472]: 2025-12-09 12:02:02.113350907 +0000 UTC m=+0.041060962 container create 924cb99c3b39236b3437b6a57c44d200a806cdbac5d0c27dba1484f848652966 (image=quay.io/ceph/ceph:v19, name=thirsty_carver, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 09 12:02:02 compute-0 systemd[1]: Started libpod-conmon-924cb99c3b39236b3437b6a57c44d200a806cdbac5d0c27dba1484f848652966.scope.
Dec 09 12:02:02 compute-0 podman[76472]: 2025-12-09 12:02:02.094042011 +0000 UTC m=+0.021752096 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:02:02 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:02:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da5ff44a3001c5b00cb21ca5672faa418317ef1e821e5e4450cfcbd8d4e1b8c0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:02:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da5ff44a3001c5b00cb21ca5672faa418317ef1e821e5e4450cfcbd8d4e1b8c0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 12:02:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da5ff44a3001c5b00cb21ca5672faa418317ef1e821e5e4450cfcbd8d4e1b8c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:02:02 compute-0 sudo[76431]: pam_unix(sudo:session): session closed for user root
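[Note] Interleaved with the orchestrator calls, the mgr is driving this host over SSH as ceph-admin: it resolves python3 with "which", then executes the cephadm binary it staged under /var/lib/ceph/<fsid>/ with the check-host subcommand (the sudo[76431] session that just closed). The same prerequisite check can be run by hand with a locally installed cephadm, as a sketch:

    # Verify podman, chrony, systemd and other host prerequisites,
    # as the mgr just did remotely
    sudo cephadm check-host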
Dec 09 12:02:02 compute-0 podman[76472]: 2025-12-09 12:02:02.226298659 +0000 UTC m=+0.154008724 container init 924cb99c3b39236b3437b6a57c44d200a806cdbac5d0c27dba1484f848652966 (image=quay.io/ceph/ceph:v19, name=thirsty_carver, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 09 12:02:02 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 12:02:02 compute-0 podman[76472]: 2025-12-09 12:02:02.233458725 +0000 UTC m=+0.161168780 container start 924cb99c3b39236b3437b6a57c44d200a806cdbac5d0c27dba1484f848652966 (image=quay.io/ceph/ceph:v19, name=thirsty_carver, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec 09 12:02:02 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:02 compute-0 podman[76472]: 2025-12-09 12:02:02.237095132 +0000 UTC m=+0.164805217 container attach 924cb99c3b39236b3437b6a57c44d200a806cdbac5d0c27dba1484f848652966 (image=quay.io/ceph/ceph:v19, name=thirsty_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 12:02:02 compute-0 sudo[76513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 12:02:02 compute-0 sudo[76513]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:02:02 compute-0 sudo[76513]: pam_unix(sudo:session): session closed for user root
Dec 09 12:02:02 compute-0 sudo[76538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Dec 09 12:02:02 compute-0 sudo[76538]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:02:02 compute-0 ceph-mon[74388]: from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 12:02:02 compute-0 ceph-mon[74388]: Saving service mgr spec with placement count:2
Dec 09 12:02:02 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:02 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:02 compute-0 ceph-mon[74388]: from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 12:02:02 compute-0 ceph-mon[74388]: Saving service crash spec with placement *
Dec 09 12:02:02 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:02 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:02 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0)
Dec 09 12:02:02 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3460977930' entity='client.admin' 
Dec 09 12:02:02 compute-0 systemd[1]: libpod-924cb99c3b39236b3437b6a57c44d200a806cdbac5d0c27dba1484f848652966.scope: Deactivated successfully.
Dec 09 12:02:02 compute-0 podman[76472]: 2025-12-09 12:02:02.640491512 +0000 UTC m=+0.568201567 container died 924cb99c3b39236b3437b6a57c44d200a806cdbac5d0c27dba1484f848652966 (image=quay.io/ceph/ceph:v19, name=thirsty_carver, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Dec 09 12:02:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-da5ff44a3001c5b00cb21ca5672faa418317ef1e821e5e4450cfcbd8d4e1b8c0-merged.mount: Deactivated successfully.
Dec 09 12:02:02 compute-0 podman[76472]: 2025-12-09 12:02:02.675106647 +0000 UTC m=+0.602816702 container remove 924cb99c3b39236b3437b6a57c44d200a806cdbac5d0c27dba1484f848652966 (image=quay.io/ceph/ceph:v19, name=thirsty_carver, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 12:02:02 compute-0 systemd[1]: libpod-conmon-924cb99c3b39236b3437b6a57c44d200a806cdbac5d0c27dba1484f848652966.scope: Deactivated successfully.
Dec 09 12:02:02 compute-0 podman[76633]: 2025-12-09 12:02:02.745661032 +0000 UTC m=+0.043959766 container create 805e8b56bf9946235f6ee3e3b88bfd2745fe27ba64b3edae13acab695d27cc27 (image=quay.io/ceph/ceph:v19, name=intelligent_pascal, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325)
Dec 09 12:02:02 compute-0 systemd[1]: Started libpod-conmon-805e8b56bf9946235f6ee3e3b88bfd2745fe27ba64b3edae13acab695d27cc27.scope.
Dec 09 12:02:02 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:02:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a95a1cacf3f79892aa5451b97508429a67c2355881f11d9137aa5bd61836d73c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:02:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a95a1cacf3f79892aa5451b97508429a67c2355881f11d9137aa5bd61836d73c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 12:02:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a95a1cacf3f79892aa5451b97508429a67c2355881f11d9137aa5bd61836d73c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:02:02 compute-0 podman[76633]: 2025-12-09 12:02:02.82048605 +0000 UTC m=+0.118784804 container init 805e8b56bf9946235f6ee3e3b88bfd2745fe27ba64b3edae13acab695d27cc27 (image=quay.io/ceph/ceph:v19, name=intelligent_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True)
Dec 09 12:02:02 compute-0 podman[76633]: 2025-12-09 12:02:02.725651066 +0000 UTC m=+0.023949830 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:02:02 compute-0 podman[76633]: 2025-12-09 12:02:02.826649023 +0000 UTC m=+0.124947757 container start 805e8b56bf9946235f6ee3e3b88bfd2745fe27ba64b3edae13acab695d27cc27 (image=quay.io/ceph/ceph:v19, name=intelligent_pascal, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 12:02:02 compute-0 podman[76633]: 2025-12-09 12:02:02.829893664 +0000 UTC m=+0.128192388 container attach 805e8b56bf9946235f6ee3e3b88bfd2745fe27ba64b3edae13acab695d27cc27 (image=quay.io/ceph/ceph:v19, name=intelligent_pascal, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 12:02:02 compute-0 podman[76684]: 2025-12-09 12:02:02.904931192 +0000 UTC m=+0.063568670 container exec a4b836a90c212a6dcd631d0879d1d67c676cdc16d15f42acc55a122ac896ef53 (image=quay.io/ceph/ceph:v19, name=ceph-750b57e3-924f-51a5-ab09-01517535f732-mon-compute-0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 09 12:02:02 compute-0 podman[76684]: 2025-12-09 12:02:02.997954933 +0000 UTC m=+0.156592421 container exec_died a4b836a90c212a6dcd631d0879d1d67c676cdc16d15f42acc55a122ac896ef53 (image=quay.io/ceph/ceph:v19, name=ceph-750b57e3-924f-51a5-ab09-01517535f732-mon-compute-0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 12:02:03 compute-0 sudo[76538]: pam_unix(sudo:session): session closed for user root
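[Note] The "ls" session (sudo[76538], opened at 12:02:02 and closed on the line above) is how the mgr refreshes its view of daemons already deployed here; cephadm prints them as a JSON array. Run locally, as a sketch; the jq filter and the "name"/"state" keys are assumptions based on typical cephadm ls output:

    # List the daemons cephadm manages on this host (JSON), as the mgr's refresh does
    sudo cephadm ls
    # Assumed field names; e.g. condense to daemon name and state with jq
    sudo cephadm ls | jq -r '.[] | "\(.name) \(.state)"'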
Dec 09 12:02:03 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 12:02:03 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:03 compute-0 ceph-mgr[74679]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 12:02:03 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0)
Dec 09 12:02:03 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
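[Note] Next the bootstrap registers a managed client keyring: "orch client-keyring set" tells cephadm to keep ceph.client.admin.keyring present under /etc/ceph on every host carrying the _admin label, with the rule stored under mgr/cephadm/client_keyrings. Equivalent call, as a sketch:

    # Maintain client.admin's keyring on all hosts labelled _admin
    ceph orch client-keyring set client.admin label:_admin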
Dec 09 12:02:03 compute-0 sudo[76751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 12:02:03 compute-0 sudo[76751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:02:03 compute-0 systemd[1]: libpod-805e8b56bf9946235f6ee3e3b88bfd2745fe27ba64b3edae13acab695d27cc27.scope: Deactivated successfully.
Dec 09 12:02:03 compute-0 podman[76633]: 2025-12-09 12:02:03.196855928 +0000 UTC m=+0.495154662 container died 805e8b56bf9946235f6ee3e3b88bfd2745fe27ba64b3edae13acab695d27cc27 (image=quay.io/ceph/ceph:v19, name=intelligent_pascal, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec 09 12:02:03 compute-0 sudo[76751]: pam_unix(sudo:session): session closed for user root
Dec 09 12:02:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-a95a1cacf3f79892aa5451b97508429a67c2355881f11d9137aa5bd61836d73c-merged.mount: Deactivated successfully.
Dec 09 12:02:03 compute-0 podman[76633]: 2025-12-09 12:02:03.23079054 +0000 UTC m=+0.529089284 container remove 805e8b56bf9946235f6ee3e3b88bfd2745fe27ba64b3edae13acab695d27cc27 (image=quay.io/ceph/ceph:v19, name=intelligent_pascal, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid)
Dec 09 12:02:03 compute-0 systemd[1]: libpod-conmon-805e8b56bf9946235f6ee3e3b88bfd2745fe27ba64b3edae13acab695d27cc27.scope: Deactivated successfully.
Dec 09 12:02:03 compute-0 sudo[76780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 09 12:02:03 compute-0 sudo[76780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:02:03 compute-0 podman[76810]: 2025-12-09 12:02:03.300223862 +0000 UTC m=+0.047307231 container create bf2ef7848664d96e1e1fa69cc96bd60b9e5ed9f100ada042e1553a4a217b6be7 (image=quay.io/ceph/ceph:v19, name=silly_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec 09 12:02:03 compute-0 systemd[1]: Started libpod-conmon-bf2ef7848664d96e1e1fa69cc96bd60b9e5ed9f100ada042e1553a4a217b6be7.scope.
Dec 09 12:02:03 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:02:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/354a4306b860135a999abdb16163e4dee54607457788e4c9ccddc5e1a62353ec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:02:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/354a4306b860135a999abdb16163e4dee54607457788e4c9ccddc5e1a62353ec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:02:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/354a4306b860135a999abdb16163e4dee54607457788e4c9ccddc5e1a62353ec/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 12:02:03 compute-0 podman[76810]: 2025-12-09 12:02:03.369758438 +0000 UTC m=+0.116841837 container init bf2ef7848664d96e1e1fa69cc96bd60b9e5ed9f100ada042e1553a4a217b6be7 (image=quay.io/ceph/ceph:v19, name=silly_goodall, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 12:02:03 compute-0 podman[76810]: 2025-12-09 12:02:03.376106297 +0000 UTC m=+0.123189666 container start bf2ef7848664d96e1e1fa69cc96bd60b9e5ed9f100ada042e1553a4a217b6be7 (image=quay.io/ceph/ceph:v19, name=silly_goodall, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 12:02:03 compute-0 podman[76810]: 2025-12-09 12:02:03.282207928 +0000 UTC m=+0.029291337 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:02:03 compute-0 podman[76810]: 2025-12-09 12:02:03.379527704 +0000 UTC m=+0.126611103 container attach bf2ef7848664d96e1e1fa69cc96bd60b9e5ed9f100ada042e1553a4a217b6be7 (image=quay.io/ceph/ceph:v19, name=silly_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec 09 12:02:03 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 76850 (sysctl)
Dec 09 12:02:03 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Dec 09 12:02:03 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Dec 09 12:02:03 compute-0 ceph-mon[74388]: from='client.? 192.168.122.100:0/3460977930' entity='client.admin' 
Dec 09 12:02:03 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:03 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:03 compute-0 ceph-mgr[74679]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 12:02:03 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 09 12:02:03 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:03 compute-0 ceph-mgr[74679]: [cephadm INFO root] Added label _admin to host compute-0
Dec 09 12:02:03 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Dec 09 12:02:03 compute-0 silly_goodall[76830]: Added label _admin to host compute-0
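[Note] And the label itself: "orch host label add" tags compute-0 with _admin, which makes the keyring rule above apply to this host. Sketch of the equivalent command:

    # Tag the host (what logged "Added label _admin to host compute-0")
    ceph orch host label add compute-0 _admin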
Dec 09 12:02:03 compute-0 sudo[76780]: pam_unix(sudo:session): session closed for user root
Dec 09 12:02:03 compute-0 systemd[1]: libpod-bf2ef7848664d96e1e1fa69cc96bd60b9e5ed9f100ada042e1553a4a217b6be7.scope: Deactivated successfully.
Dec 09 12:02:03 compute-0 podman[76810]: 2025-12-09 12:02:03.817250021 +0000 UTC m=+0.564333380 container died bf2ef7848664d96e1e1fa69cc96bd60b9e5ed9f100ada042e1553a4a217b6be7 (image=quay.io/ceph/ceph:v19, name=silly_goodall, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 12:02:03 compute-0 ceph-mgr[74679]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 09 12:02:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-354a4306b860135a999abdb16163e4dee54607457788e4c9ccddc5e1a62353ec-merged.mount: Deactivated successfully.
Dec 09 12:02:03 compute-0 podman[76810]: 2025-12-09 12:02:03.859553535 +0000 UTC m=+0.606636904 container remove bf2ef7848664d96e1e1fa69cc96bd60b9e5ed9f100ada042e1553a4a217b6be7 (image=quay.io/ceph/ceph:v19, name=silly_goodall, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec 09 12:02:03 compute-0 systemd[1]: libpod-conmon-bf2ef7848664d96e1e1fa69cc96bd60b9e5ed9f100ada042e1553a4a217b6be7.scope: Deactivated successfully.
Dec 09 12:02:03 compute-0 sudo[76900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 12:02:03 compute-0 sudo[76900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:02:03 compute-0 sudo[76900]: pam_unix(sudo:session): session closed for user root
Dec 09 12:02:03 compute-0 podman[76920]: 2025-12-09 12:02:03.920654477 +0000 UTC m=+0.038272438 container create c1f720c55a1f415319ab607b53b28718f7745fe5628b4969be9a23194c9db49c (image=quay.io/ceph/ceph:v19, name=cool_kepler, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 12:02:03 compute-0 systemd[1]: Started libpod-conmon-c1f720c55a1f415319ab607b53b28718f7745fe5628b4969be9a23194c9db49c.scope.
Dec 09 12:02:03 compute-0 sudo[76939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Dec 09 12:02:03 compute-0 sudo[76939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:02:03 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:02:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/389bf1d0af825379e638b1a52ced6c1edd029eb378ea9c42e21da84c114600ed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:02:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/389bf1d0af825379e638b1a52ced6c1edd029eb378ea9c42e21da84c114600ed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:02:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/389bf1d0af825379e638b1a52ced6c1edd029eb378ea9c42e21da84c114600ed/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 12:02:04 compute-0 podman[76920]: 2025-12-09 12:02:03.904096639 +0000 UTC m=+0.021714620 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:02:04 compute-0 podman[76920]: 2025-12-09 12:02:04.007316949 +0000 UTC m=+0.124934920 container init c1f720c55a1f415319ab607b53b28718f7745fe5628b4969be9a23194c9db49c (image=quay.io/ceph/ceph:v19, name=cool_kepler, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 09 12:02:04 compute-0 podman[76920]: 2025-12-09 12:02:04.017380864 +0000 UTC m=+0.134998825 container start c1f720c55a1f415319ab607b53b28718f7745fe5628b4969be9a23194c9db49c (image=quay.io/ceph/ceph:v19, name=cool_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec 09 12:02:04 compute-0 podman[76920]: 2025-12-09 12:02:04.351064316 +0000 UTC m=+0.468682367 container attach c1f720c55a1f415319ab607b53b28718f7745fe5628b4969be9a23194c9db49c (image=quay.io/ceph/ceph:v19, name=cool_kepler, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 09 12:02:04 compute-0 sudo[76939]: pam_unix(sudo:session): session closed for user root
Dec 09 12:02:04 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 12:02:04 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:04 compute-0 sudo[77011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 12:02:04 compute-0 sudo[77011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:02:04 compute-0 sudo[77011]: pam_unix(sudo:session): session closed for user root
Dec 09 12:02:04 compute-0 sudo[77036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 750b57e3-924f-51a5-ab09-01517535f732 -- inventory --format=json-pretty --filter-for-batch
Dec 09 12:02:04 compute-0 sudo[77036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:02:04 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0)
Dec 09 12:02:04 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/18607144' entity='client.admin' 
Dec 09 12:02:04 compute-0 cool_kepler[76966]: set mgr/dashboard/cluster/status
Dec 09 12:02:04 compute-0 ceph-mon[74388]: from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 12:02:04 compute-0 ceph-mon[74388]: from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 12:02:04 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:04 compute-0 ceph-mon[74388]: Added label _admin to host compute-0
Dec 09 12:02:04 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:04 compute-0 systemd[1]: libpod-c1f720c55a1f415319ab607b53b28718f7745fe5628b4969be9a23194c9db49c.scope: Deactivated successfully.
Dec 09 12:02:04 compute-0 podman[76920]: 2025-12-09 12:02:04.855019697 +0000 UTC m=+0.972637678 container died c1f720c55a1f415319ab607b53b28718f7745fe5628b4969be9a23194c9db49c (image=quay.io/ceph/ceph:v19, name=cool_kepler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 12:02:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-389bf1d0af825379e638b1a52ced6c1edd029eb378ea9c42e21da84c114600ed-merged.mount: Deactivated successfully.
Dec 09 12:02:04 compute-0 podman[76920]: 2025-12-09 12:02:04.988868935 +0000 UTC m=+1.106486906 container remove c1f720c55a1f415319ab607b53b28718f7745fe5628b4969be9a23194c9db49c (image=quay.io/ceph/ceph:v19, name=cool_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 09 12:02:05 compute-0 systemd[1]: libpod-conmon-c1f720c55a1f415319ab607b53b28718f7745fe5628b4969be9a23194c9db49c.scope: Deactivated successfully.
Dec 09 12:02:05 compute-0 sudo[73332]: pam_unix(sudo:session): session closed for user root
Dec 09 12:02:05 compute-0 podman[77113]: 2025-12-09 12:02:05.159137103 +0000 UTC m=+0.039807757 container create a5fd7fb6d1868d43c35bd5909b76c444316ce4494cbb7ec86a49f2f7fefed5b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_engelbart, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec 09 12:02:05 compute-0 systemd[1]: Started libpod-conmon-a5fd7fb6d1868d43c35bd5909b76c444316ce4494cbb7ec86a49f2f7fefed5b1.scope.
Dec 09 12:02:05 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:02:05 compute-0 podman[77113]: 2025-12-09 12:02:05.141197591 +0000 UTC m=+0.021868285 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 09 12:02:05 compute-0 podman[77113]: 2025-12-09 12:02:05.265661266 +0000 UTC m=+0.146331940 container init a5fd7fb6d1868d43c35bd5909b76c444316ce4494cbb7ec86a49f2f7fefed5b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_engelbart, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 09 12:02:05 compute-0 podman[77113]: 2025-12-09 12:02:05.273201822 +0000 UTC m=+0.153872476 container start a5fd7fb6d1868d43c35bd5909b76c444316ce4494cbb7ec86a49f2f7fefed5b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_engelbart, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 09 12:02:05 compute-0 strange_engelbart[77129]: 167 167
Dec 09 12:02:05 compute-0 systemd[1]: libpod-a5fd7fb6d1868d43c35bd5909b76c444316ce4494cbb7ec86a49f2f7fefed5b1.scope: Deactivated successfully.
Dec 09 12:02:05 compute-0 conmon[77129]: conmon a5fd7fb6d1868d43c35b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a5fd7fb6d1868d43c35bd5909b76c444316ce4494cbb7ec86a49f2f7fefed5b1.scope/container/memory.events
Dec 09 12:02:05 compute-0 podman[77113]: 2025-12-09 12:02:05.30671228 +0000 UTC m=+0.187383024 container attach a5fd7fb6d1868d43c35bd5909b76c444316ce4494cbb7ec86a49f2f7fefed5b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_engelbart, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 09 12:02:05 compute-0 podman[77113]: 2025-12-09 12:02:05.307589488 +0000 UTC m=+0.188260142 container died a5fd7fb6d1868d43c35bd5909b76c444316ce4494cbb7ec86a49f2f7fefed5b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 12:02:05 compute-0 sudo[77170]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqwzhjtnkzsykeysqvipglszgduzqglq ; /usr/bin/python3'
Dec 09 12:02:05 compute-0 sudo[77170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:02:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-ffa8f1078c5d9cd2ebc88b31dcebb935155d45753b3dbb8594742f7818e17d65-merged.mount: Deactivated successfully.
Dec 09 12:02:05 compute-0 podman[77113]: 2025-12-09 12:02:05.551981546 +0000 UTC m=+0.432652200 container remove a5fd7fb6d1868d43c35bd5909b76c444316ce4494cbb7ec86a49f2f7fefed5b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_engelbart, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 09 12:02:05 compute-0 systemd[1]: libpod-conmon-a5fd7fb6d1868d43c35bd5909b76c444316ce4494cbb7ec86a49f2f7fefed5b1.scope: Deactivated successfully.
Dec 09 12:02:05 compute-0 python3[77173]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 750b57e3-924f-51a5-ab09-01517535f732 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 12:02:05 compute-0 podman[77181]: 2025-12-09 12:02:05.755736692 +0000 UTC m=+0.082779901 container create a92771581c5951447036dbb468812900a4e8c961d5bf8781427ca754cd0aa9ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_lamport, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec 09 12:02:05 compute-0 podman[77181]: 2025-12-09 12:02:05.700999259 +0000 UTC m=+0.028042498 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 09 12:02:05 compute-0 systemd[1]: Started libpod-conmon-a92771581c5951447036dbb468812900a4e8c961d5bf8781427ca754cd0aa9ef.scope.
Dec 09 12:02:05 compute-0 podman[77188]: 2025-12-09 12:02:05.718177396 +0000 UTC m=+0.025538430 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:02:05 compute-0 ceph-mgr[74679]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 09 12:02:05 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:02:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09d5534816f83413d028db5d12ae80b60e104e32705f69f40b835a2cfe823dac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 12:02:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09d5534816f83413d028db5d12ae80b60e104e32705f69f40b835a2cfe823dac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:02:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09d5534816f83413d028db5d12ae80b60e104e32705f69f40b835a2cfe823dac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:02:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09d5534816f83413d028db5d12ae80b60e104e32705f69f40b835a2cfe823dac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 12:02:05 compute-0 podman[77188]: 2025-12-09 12:02:05.870155323 +0000 UTC m=+0.177516337 container create 3ebc5ef8f4bc47df478faee2a9cac0d66dc18a5e61bd1d7d653af80060bb8f54 (image=quay.io/ceph/ceph:v19, name=quirky_burnell, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 12:02:05 compute-0 ceph-mon[74388]: from='client.? 192.168.122.100:0/18607144' entity='client.admin' 
Dec 09 12:02:05 compute-0 podman[77181]: 2025-12-09 12:02:05.876734279 +0000 UTC m=+0.203777508 container init a92771581c5951447036dbb468812900a4e8c961d5bf8781427ca754cd0aa9ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 09 12:02:05 compute-0 podman[77181]: 2025-12-09 12:02:05.887556867 +0000 UTC m=+0.214600076 container start a92771581c5951447036dbb468812900a4e8c961d5bf8781427ca754cd0aa9ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_lamport, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 12:02:05 compute-0 systemd[1]: Started libpod-conmon-3ebc5ef8f4bc47df478faee2a9cac0d66dc18a5e61bd1d7d653af80060bb8f54.scope.
Dec 09 12:02:05 compute-0 podman[77181]: 2025-12-09 12:02:05.923364888 +0000 UTC m=+0.250408127 container attach a92771581c5951447036dbb468812900a4e8c961d5bf8781427ca754cd0aa9ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 12:02:05 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:02:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d740ea7e3df598f3a7870191bf1e1c03cb22c4716446dc83c8d988cc800ef77b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:02:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d740ea7e3df598f3a7870191bf1e1c03cb22c4716446dc83c8d988cc800ef77b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:02:05 compute-0 podman[77188]: 2025-12-09 12:02:05.980342571 +0000 UTC m=+0.287703605 container init 3ebc5ef8f4bc47df478faee2a9cac0d66dc18a5e61bd1d7d653af80060bb8f54 (image=quay.io/ceph/ceph:v19, name=quirky_burnell, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec 09 12:02:05 compute-0 podman[77188]: 2025-12-09 12:02:05.98704891 +0000 UTC m=+0.294409924 container start 3ebc5ef8f4bc47df478faee2a9cac0d66dc18a5e61bd1d7d653af80060bb8f54 (image=quay.io/ceph/ceph:v19, name=quirky_burnell, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 09 12:02:05 compute-0 podman[77188]: 2025-12-09 12:02:05.990312953 +0000 UTC m=+0.297673987 container attach 3ebc5ef8f4bc47df478faee2a9cac0d66dc18a5e61bd1d7d653af80060bb8f54 (image=quay.io/ceph/ceph:v19, name=quirky_burnell, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 12:02:06 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 12:02:06 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0)
Dec 09 12:02:06 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1545362134' entity='client.admin' 
Dec 09 12:02:06 compute-0 systemd[1]: libpod-3ebc5ef8f4bc47df478faee2a9cac0d66dc18a5e61bd1d7d653af80060bb8f54.scope: Deactivated successfully.
Dec 09 12:02:06 compute-0 podman[77540]: 2025-12-09 12:02:06.582972138 +0000 UTC m=+0.030981989 container died 3ebc5ef8f4bc47df478faee2a9cac0d66dc18a5e61bd1d7d653af80060bb8f54 (image=quay.io/ceph/ceph:v19, name=quirky_burnell, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325)
Dec 09 12:02:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-d740ea7e3df598f3a7870191bf1e1c03cb22c4716446dc83c8d988cc800ef77b-merged.mount: Deactivated successfully.
Dec 09 12:02:06 compute-0 podman[77540]: 2025-12-09 12:02:06.656409356 +0000 UTC m=+0.104419197 container remove 3ebc5ef8f4bc47df478faee2a9cac0d66dc18a5e61bd1d7d653af80060bb8f54 (image=quay.io/ceph/ceph:v19, name=quirky_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 12:02:06 compute-0 systemd[1]: libpod-conmon-3ebc5ef8f4bc47df478faee2a9cac0d66dc18a5e61bd1d7d653af80060bb8f54.scope: Deactivated successfully.
Dec 09 12:02:06 compute-0 sudo[77170]: pam_unix(sudo:session): session closed for user root
Dec 09 12:02:06 compute-0 agitated_lamport[77209]: [
Dec 09 12:02:06 compute-0 agitated_lamport[77209]:     {
Dec 09 12:02:06 compute-0 agitated_lamport[77209]:         "available": false,
Dec 09 12:02:06 compute-0 agitated_lamport[77209]:         "being_replaced": false,
Dec 09 12:02:06 compute-0 agitated_lamport[77209]:         "ceph_device_lvm": false,
Dec 09 12:02:06 compute-0 agitated_lamport[77209]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 09 12:02:06 compute-0 agitated_lamport[77209]:         "lsm_data": {},
Dec 09 12:02:06 compute-0 agitated_lamport[77209]:         "lvs": [],
Dec 09 12:02:06 compute-0 agitated_lamport[77209]:         "path": "/dev/sr0",
Dec 09 12:02:06 compute-0 agitated_lamport[77209]:         "rejected_reasons": [
Dec 09 12:02:06 compute-0 agitated_lamport[77209]:             "Insufficient space (<5GB)",
Dec 09 12:02:06 compute-0 agitated_lamport[77209]:             "Has a FileSystem"
Dec 09 12:02:06 compute-0 agitated_lamport[77209]:         ],
Dec 09 12:02:06 compute-0 agitated_lamport[77209]:         "sys_api": {
Dec 09 12:02:06 compute-0 agitated_lamport[77209]:             "actuators": null,
Dec 09 12:02:06 compute-0 agitated_lamport[77209]:             "device_nodes": [
Dec 09 12:02:06 compute-0 agitated_lamport[77209]:                 "sr0"
Dec 09 12:02:06 compute-0 agitated_lamport[77209]:             ],
Dec 09 12:02:06 compute-0 agitated_lamport[77209]:             "devname": "sr0",
Dec 09 12:02:06 compute-0 agitated_lamport[77209]:             "human_readable_size": "482.00 KB",
Dec 09 12:02:06 compute-0 agitated_lamport[77209]:             "id_bus": "ata",
Dec 09 12:02:06 compute-0 agitated_lamport[77209]:             "model": "QEMU DVD-ROM",
Dec 09 12:02:06 compute-0 agitated_lamport[77209]:             "nr_requests": "2",
Dec 09 12:02:06 compute-0 agitated_lamport[77209]:             "parent": "/dev/sr0",
Dec 09 12:02:06 compute-0 agitated_lamport[77209]:             "partitions": {},
Dec 09 12:02:06 compute-0 agitated_lamport[77209]:             "path": "/dev/sr0",
Dec 09 12:02:06 compute-0 agitated_lamport[77209]:             "removable": "1",
Dec 09 12:02:06 compute-0 agitated_lamport[77209]:             "rev": "2.5+",
Dec 09 12:02:06 compute-0 agitated_lamport[77209]:             "ro": "0",
Dec 09 12:02:06 compute-0 agitated_lamport[77209]:             "rotational": "1",
Dec 09 12:02:06 compute-0 agitated_lamport[77209]:             "sas_address": "",
Dec 09 12:02:06 compute-0 agitated_lamport[77209]:             "sas_device_handle": "",
Dec 09 12:02:06 compute-0 agitated_lamport[77209]:             "scheduler_mode": "mq-deadline",
Dec 09 12:02:06 compute-0 agitated_lamport[77209]:             "sectors": 0,
Dec 09 12:02:06 compute-0 agitated_lamport[77209]:             "sectorsize": "2048",
Dec 09 12:02:06 compute-0 agitated_lamport[77209]:             "size": 493568.0,
Dec 09 12:02:06 compute-0 agitated_lamport[77209]:             "support_discard": "2048",
Dec 09 12:02:06 compute-0 agitated_lamport[77209]:             "type": "disk",
Dec 09 12:02:06 compute-0 agitated_lamport[77209]:             "vendor": "QEMU"
Dec 09 12:02:06 compute-0 agitated_lamport[77209]:         }
Dec 09 12:02:06 compute-0 agitated_lamport[77209]:     }
Dec 09 12:02:06 compute-0 agitated_lamport[77209]: ]
Dec 09 12:02:06 compute-0 systemd[1]: libpod-a92771581c5951447036dbb468812900a4e8c961d5bf8781427ca754cd0aa9ef.scope: Deactivated successfully.
Dec 09 12:02:06 compute-0 podman[78400]: 2025-12-09 12:02:06.874432669 +0000 UTC m=+0.030605119 container died a92771581c5951447036dbb468812900a4e8c961d5bf8781427ca754cd0aa9ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 09 12:02:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-09d5534816f83413d028db5d12ae80b60e104e32705f69f40b835a2cfe823dac-merged.mount: Deactivated successfully.
Dec 09 12:02:06 compute-0 podman[78400]: 2025-12-09 12:02:06.921892294 +0000 UTC m=+0.078064704 container remove a92771581c5951447036dbb468812900a4e8c961d5bf8781427ca754cd0aa9ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_lamport, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 12:02:06 compute-0 systemd[1]: libpod-conmon-a92771581c5951447036dbb468812900a4e8c961d5bf8781427ca754cd0aa9ef.scope: Deactivated successfully.
Dec 09 12:02:06 compute-0 sudo[77036]: pam_unix(sudo:session): session closed for user root
Dec 09 12:02:06 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 12:02:06 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:06 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 12:02:07 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:07 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 12:02:07 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:07 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 12:02:07 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:07 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec 09 12:02:07 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 09 12:02:07 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 12:02:07 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:02:07 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 09 12:02:07 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 09 12:02:07 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Dec 09 12:02:07 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Dec 09 12:02:07 compute-0 sudo[78414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 09 12:02:07 compute-0 sudo[78414]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:02:07 compute-0 sudo[78414]: pam_unix(sudo:session): session closed for user root
Dec 09 12:02:07 compute-0 sudo[78463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/etc/ceph
Dec 09 12:02:07 compute-0 sudo[78463]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:02:07 compute-0 sudo[78463]: pam_unix(sudo:session): session closed for user root
Dec 09 12:02:07 compute-0 sudo[78516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/etc/ceph/ceph.conf.new
Dec 09 12:02:07 compute-0 sudo[78516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:02:07 compute-0 sudo[78516]: pam_unix(sudo:session): session closed for user root
Dec 09 12:02:07 compute-0 sudo[78564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732
Dec 09 12:02:07 compute-0 sudo[78564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:02:07 compute-0 sudo[78564]: pam_unix(sudo:session): session closed for user root
Dec 09 12:02:07 compute-0 sudo[78589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/etc/ceph/ceph.conf.new
Dec 09 12:02:07 compute-0 sudo[78589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:02:07 compute-0 sudo[78589]: pam_unix(sudo:session): session closed for user root
Dec 09 12:02:07 compute-0 sudo[78637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/etc/ceph/ceph.conf.new
Dec 09 12:02:07 compute-0 sudo[78637]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:02:07 compute-0 sudo[78637]: pam_unix(sudo:session): session closed for user root
Dec 09 12:02:07 compute-0 ceph-mon[74388]: from='client.? 192.168.122.100:0/1545362134' entity='client.admin' 
Dec 09 12:02:07 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:07 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:07 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:07 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:07 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 09 12:02:07 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:02:07 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 09 12:02:07 compute-0 ceph-mon[74388]: Updating compute-0:/etc/ceph/ceph.conf
Dec 09 12:02:07 compute-0 sudo[78685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/etc/ceph/ceph.conf.new
Dec 09 12:02:07 compute-0 sudo[78685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:02:07 compute-0 sudo[78685]: pam_unix(sudo:session): session closed for user root
Dec 09 12:02:07 compute-0 sudo[78735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 09 12:02:07 compute-0 ceph-mgr[74679]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 09 12:02:07 compute-0 sudo[78735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:02:07 compute-0 sudo[78735]: pam_unix(sudo:session): session closed for user root
Dec 09 12:02:07 compute-0 sudo[78782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agewgdnxsturvyfpxvsjximmhcerbczs ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765281727.216112-37096-117536444465642/async_wrapper.py j236057293917 30 /home/zuul/.ansible/tmp/ansible-tmp-1765281727.216112-37096-117536444465642/AnsiballZ_command.py _'
Dec 09 12:02:07 compute-0 sudo[78782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:02:07 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.conf
Dec 09 12:02:07 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.conf
Dec 09 12:02:07 compute-0 sudo[78787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config
Dec 09 12:02:07 compute-0 sudo[78787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:02:07 compute-0 sudo[78787]: pam_unix(sudo:session): session closed for user root
Dec 09 12:02:07 compute-0 sudo[78812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config
Dec 09 12:02:07 compute-0 sudo[78812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:02:07 compute-0 sudo[78812]: pam_unix(sudo:session): session closed for user root
Dec 09 12:02:08 compute-0 ansible-async_wrapper.py[78786]: Invoked with j236057293917 30 /home/zuul/.ansible/tmp/ansible-tmp-1765281727.216112-37096-117536444465642/AnsiballZ_command.py _
Dec 09 12:02:08 compute-0 ansible-async_wrapper.py[78846]: Starting module and watcher
Dec 09 12:02:08 compute-0 ansible-async_wrapper.py[78846]: Start watching 78848 (30)
Dec 09 12:02:08 compute-0 ansible-async_wrapper.py[78848]: Start module (78848)
Dec 09 12:02:08 compute-0 ansible-async_wrapper.py[78786]: Return async_wrapper task started.
Dec 09 12:02:08 compute-0 sudo[78782]: pam_unix(sudo:session): session closed for user root
Dec 09 12:02:08 compute-0 sudo[78837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.conf.new
Dec 09 12:02:08 compute-0 sudo[78837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:02:08 compute-0 sudo[78837]: pam_unix(sudo:session): session closed for user root
Dec 09 12:02:08 compute-0 sudo[78867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732
Dec 09 12:02:08 compute-0 sudo[78867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:02:08 compute-0 sudo[78867]: pam_unix(sudo:session): session closed for user root
Dec 09 12:02:08 compute-0 sudo[78892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.conf.new
Dec 09 12:02:08 compute-0 sudo[78892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:02:08 compute-0 sudo[78892]: pam_unix(sudo:session): session closed for user root
Dec 09 12:02:08 compute-0 python3[78854]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 750b57e3-924f-51a5-ab09-01517535f732 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 12:02:08 compute-0 sudo[78940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.conf.new
Dec 09 12:02:08 compute-0 sudo[78940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:02:08 compute-0 sudo[78940]: pam_unix(sudo:session): session closed for user root
Dec 09 12:02:08 compute-0 podman[78943]: 2025-12-09 12:02:08.335103098 +0000 UTC m=+0.062603300 container create c95c202b5fdd48c5e1fb02050fbacb8e8382e5c207fa19c490e4e6a9fb681b93 (image=quay.io/ceph/ceph:v19, name=recursing_newton, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 12:02:08 compute-0 systemd[1]: Started libpod-conmon-c95c202b5fdd48c5e1fb02050fbacb8e8382e5c207fa19c490e4e6a9fb681b93.scope.
Dec 09 12:02:08 compute-0 sudo[78978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.conf.new
Dec 09 12:02:08 compute-0 sudo[78978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:02:08 compute-0 sudo[78978]: pam_unix(sudo:session): session closed for user root
Dec 09 12:02:08 compute-0 podman[78943]: 2025-12-09 12:02:08.307924237 +0000 UTC m=+0.035424469 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:02:08 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:02:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/219db30ddbc22ef449640e8df38a5cba4dc55d0634a60a3c23dabae435b29f6a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:02:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/219db30ddbc22ef449640e8df38a5cba4dc55d0634a60a3c23dabae435b29f6a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:02:08 compute-0 sudo[79008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.conf.new /var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.conf
Dec 09 12:02:08 compute-0 sudo[79008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:02:08 compute-0 sudo[79008]: pam_unix(sudo:session): session closed for user root
Dec 09 12:02:08 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 09 12:02:08 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 09 12:02:08 compute-0 podman[78943]: 2025-12-09 12:02:08.510830317 +0000 UTC m=+0.238330539 container init c95c202b5fdd48c5e1fb02050fbacb8e8382e5c207fa19c490e4e6a9fb681b93 (image=quay.io/ceph/ceph:v19, name=recursing_newton, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 12:02:08 compute-0 podman[78943]: 2025-12-09 12:02:08.518230248 +0000 UTC m=+0.245730450 container start c95c202b5fdd48c5e1fb02050fbacb8e8382e5c207fa19c490e4e6a9fb681b93 (image=quay.io/ceph/ceph:v19, name=recursing_newton, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default)
Dec 09 12:02:08 compute-0 sudo[79033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 09 12:02:08 compute-0 sudo[79033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:02:08 compute-0 podman[78943]: 2025-12-09 12:02:08.525822965 +0000 UTC m=+0.253323167 container attach c95c202b5fdd48c5e1fb02050fbacb8e8382e5c207fa19c490e4e6a9fb681b93 (image=quay.io/ceph/ceph:v19, name=recursing_newton, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 09 12:02:08 compute-0 sudo[79033]: pam_unix(sudo:session): session closed for user root
Dec 09 12:02:08 compute-0 sudo[79059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/etc/ceph
Dec 09 12:02:08 compute-0 sudo[79059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:02:08 compute-0 sudo[79059]: pam_unix(sudo:session): session closed for user root
Dec 09 12:02:08 compute-0 sudo[79084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/etc/ceph/ceph.client.admin.keyring.new
Dec 09 12:02:08 compute-0 sudo[79084]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:02:08 compute-0 sudo[79084]: pam_unix(sudo:session): session closed for user root
Dec 09 12:02:08 compute-0 sudo[79128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732
Dec 09 12:02:08 compute-0 sudo[79128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:02:08 compute-0 sudo[79128]: pam_unix(sudo:session): session closed for user root
Dec 09 12:02:08 compute-0 sudo[79153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/etc/ceph/ceph.client.admin.keyring.new
Dec 09 12:02:08 compute-0 sudo[79153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:02:08 compute-0 sudo[79153]: pam_unix(sudo:session): session closed for user root
Dec 09 12:02:08 compute-0 ceph-mon[74388]: Updating compute-0:/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.conf
Dec 09 12:02:08 compute-0 sudo[79201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/etc/ceph/ceph.client.admin.keyring.new
Dec 09 12:02:08 compute-0 sudo[79201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:02:08 compute-0 sudo[79201]: pam_unix(sudo:session): session closed for user root
Dec 09 12:02:09 compute-0 ceph-mgr[74679]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 09 12:02:09 compute-0 recursing_newton[79004]: 
Dec 09 12:02:09 compute-0 recursing_newton[79004]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec 09 12:02:09 compute-0 sudo[79226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/etc/ceph/ceph.client.admin.keyring.new
Dec 09 12:02:09 compute-0 sudo[79226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:02:09 compute-0 systemd[1]: libpod-c95c202b5fdd48c5e1fb02050fbacb8e8382e5c207fa19c490e4e6a9fb681b93.scope: Deactivated successfully.
Dec 09 12:02:09 compute-0 podman[78943]: 2025-12-09 12:02:09.030356994 +0000 UTC m=+0.757857196 container died c95c202b5fdd48c5e1fb02050fbacb8e8382e5c207fa19c490e4e6a9fb681b93 (image=quay.io/ceph/ceph:v19, name=recursing_newton, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 09 12:02:09 compute-0 sudo[79226]: pam_unix(sudo:session): session closed for user root
Dec 09 12:02:09 compute-0 sudo[79254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Dec 09 12:02:09 compute-0 sudo[79254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:02:09 compute-0 sudo[79254]: pam_unix(sudo:session): session closed for user root
Dec 09 12:02:09 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.client.admin.keyring
Dec 09 12:02:09 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.client.admin.keyring
Dec 09 12:02:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-219db30ddbc22ef449640e8df38a5cba4dc55d0634a60a3c23dabae435b29f6a-merged.mount: Deactivated successfully.
Dec 09 12:02:09 compute-0 podman[78943]: 2025-12-09 12:02:09.136099132 +0000 UTC m=+0.863599334 container remove c95c202b5fdd48c5e1fb02050fbacb8e8382e5c207fa19c490e4e6a9fb681b93 (image=quay.io/ceph/ceph:v19, name=recursing_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 12:02:09 compute-0 systemd[1]: libpod-conmon-c95c202b5fdd48c5e1fb02050fbacb8e8382e5c207fa19c490e4e6a9fb681b93.scope: Deactivated successfully.
Dec 09 12:02:09 compute-0 sudo[79310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config
Dec 09 12:02:09 compute-0 sudo[79310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:02:09 compute-0 ansible-async_wrapper.py[78848]: Module complete (78848)
Dec 09 12:02:09 compute-0 sudo[79310]: pam_unix(sudo:session): session closed for user root
Dec 09 12:02:09 compute-0 sudo[79340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config
Dec 09 12:02:09 compute-0 sudo[79340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:02:09 compute-0 sudo[79340]: pam_unix(sudo:session): session closed for user root
Dec 09 12:02:09 compute-0 sudo[79365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.client.admin.keyring.new
Dec 09 12:02:09 compute-0 sudo[79365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:02:09 compute-0 sudo[79365]: pam_unix(sudo:session): session closed for user root
Dec 09 12:02:09 compute-0 sudo[79390]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732
Dec 09 12:02:09 compute-0 sudo[79390]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:02:09 compute-0 sudo[79390]: pam_unix(sudo:session): session closed for user root
Dec 09 12:02:09 compute-0 sudo[79450]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmxakfzcfzbizmdzaqidlhkvlcnhiavu ; /usr/bin/python3'
Dec 09 12:02:09 compute-0 sudo[79450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:02:09 compute-0 sudo[79427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.client.admin.keyring.new
Dec 09 12:02:09 compute-0 sudo[79427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:02:09 compute-0 sudo[79427]: pam_unix(sudo:session): session closed for user root
Dec 09 12:02:09 compute-0 sudo[79489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.client.admin.keyring.new
Dec 09 12:02:09 compute-0 sudo[79489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:02:09 compute-0 python3[79463]: ansible-ansible.legacy.async_status Invoked with jid=j236057293917.78786 mode=status _async_dir=/root/.ansible_async
Dec 09 12:02:09 compute-0 sudo[79489]: pam_unix(sudo:session): session closed for user root
Dec 09 12:02:09 compute-0 sudo[79450]: pam_unix(sudo:session): session closed for user root
Dec 09 12:02:09 compute-0 sudo[79514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.client.admin.keyring.new
Dec 09 12:02:09 compute-0 sudo[79514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:02:09 compute-0 sudo[79514]: pam_unix(sudo:session): session closed for user root
Dec 09 12:02:09 compute-0 sudo[79560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.client.admin.keyring.new /var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.client.admin.keyring
Dec 09 12:02:09 compute-0 sudo[79560]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:02:09 compute-0 sudo[79560]: pam_unix(sudo:session): session closed for user root
Dec 09 12:02:09 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 12:02:09 compute-0 sudo[79610]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbezeeqpaheoamvvgwmtjdbuzrvwkilu ; /usr/bin/python3'
Dec 09 12:02:09 compute-0 sudo[79610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:02:09 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:09 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 12:02:09 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:09 compute-0 ceph-mgr[74679]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Dec 09 12:02:09 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 12:02:09 compute-0 ceph-mon[74388]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Dec 09 12:02:09 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 09 12:02:09 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:09 compute-0 ceph-mgr[74679]: [progress INFO root] update: starting ev 0c771901-b5df-4697-8d7c-d638c49ea7d2 (Updating crash deployment (+1 -> 1))
Dec 09 12:02:09 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 09 12:02:09 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec 09 12:02:09 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec 09 12:02:09 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 12:02:09 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:02:09 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Dec 09 12:02:09 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Dec 09 12:02:09 compute-0 python3[79612]: ansible-ansible.legacy.async_status Invoked with jid=j236057293917.78786 mode=cleanup _async_dir=/root/.ansible_async
Dec 09 12:02:09 compute-0 sudo[79610]: pam_unix(sudo:session): session closed for user root
Dec 09 12:02:09 compute-0 sudo[79613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 12:02:09 compute-0 sudo[79613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:02:09 compute-0 sudo[79613]: pam_unix(sudo:session): session closed for user root
Dec 09 12:02:10 compute-0 ceph-mon[74388]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 09 12:02:10 compute-0 ceph-mon[74388]: from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 09 12:02:10 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:10 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:10 compute-0 ceph-mon[74388]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Dec 09 12:02:10 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:10 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec 09 12:02:10 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec 09 12:02:10 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:02:10 compute-0 sudo[79638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 750b57e3-924f-51a5-ab09-01517535f732
Dec 09 12:02:10 compute-0 sudo[79638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:02:10 compute-0 sudo[79708]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwupfpwccystmvxlgzycxgrzzjvrbwev ; /usr/bin/python3'
Dec 09 12:02:10 compute-0 sudo[79708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:02:10 compute-0 python3[79715]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 09 12:02:10 compute-0 podman[79726]: 2025-12-09 12:02:10.438474388 +0000 UTC m=+0.022234807 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 09 12:02:10 compute-0 sudo[79708]: pam_unix(sudo:session): session closed for user root
Dec 09 12:02:10 compute-0 podman[79726]: 2025-12-09 12:02:10.684348441 +0000 UTC m=+0.268108840 container create 27f6dd06af34999a73209c36d60addaafd539aed9447c2d5a65e5594a54203c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_germain, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 12:02:10 compute-0 systemd[1]: Started libpod-conmon-27f6dd06af34999a73209c36d60addaafd539aed9447c2d5a65e5594a54203c8.scope.
Dec 09 12:02:10 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:02:10 compute-0 podman[79726]: 2025-12-09 12:02:10.904682817 +0000 UTC m=+0.488443236 container init 27f6dd06af34999a73209c36d60addaafd539aed9447c2d5a65e5594a54203c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_germain, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 09 12:02:10 compute-0 podman[79726]: 2025-12-09 12:02:10.911057075 +0000 UTC m=+0.494817484 container start 27f6dd06af34999a73209c36d60addaafd539aed9447c2d5a65e5594a54203c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_germain, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec 09 12:02:10 compute-0 optimistic_germain[79744]: 167 167
Dec 09 12:02:10 compute-0 systemd[1]: libpod-27f6dd06af34999a73209c36d60addaafd539aed9447c2d5a65e5594a54203c8.scope: Deactivated successfully.
Dec 09 12:02:10 compute-0 sudo[79771]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bphmthyackioqmbxkecieiakcviwhntj ; /usr/bin/python3'
Dec 09 12:02:10 compute-0 sudo[79771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:02:10 compute-0 podman[79726]: 2025-12-09 12:02:10.960289537 +0000 UTC m=+0.544049956 container attach 27f6dd06af34999a73209c36d60addaafd539aed9447c2d5a65e5594a54203c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_germain, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec 09 12:02:10 compute-0 podman[79726]: 2025-12-09 12:02:10.960847174 +0000 UTC m=+0.544607573 container died 27f6dd06af34999a73209c36d60addaafd539aed9447c2d5a65e5594a54203c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_germain, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec 09 12:02:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-617ab95cc5dc245f4929c9baa0c2db7d7ce7099bb739e4b497f07e459c280228-merged.mount: Deactivated successfully.
Dec 09 12:02:11 compute-0 ceph-mon[74388]: Updating compute-0:/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.client.admin.keyring
Dec 09 12:02:11 compute-0 ceph-mon[74388]: pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 12:02:11 compute-0 ceph-mon[74388]: Deploying daemon crash.compute-0 on compute-0
Dec 09 12:02:11 compute-0 podman[79726]: 2025-12-09 12:02:11.093192045 +0000 UTC m=+0.676952464 container remove 27f6dd06af34999a73209c36d60addaafd539aed9447c2d5a65e5594a54203c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_germain, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 09 12:02:11 compute-0 systemd[1]: libpod-conmon-27f6dd06af34999a73209c36d60addaafd539aed9447c2d5a65e5594a54203c8.scope: Deactivated successfully.
Dec 09 12:02:11 compute-0 python3[79780]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 750b57e3-924f-51a5-ab09-01517535f732 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 12:02:11 compute-0 podman[79788]: 2025-12-09 12:02:11.178002549 +0000 UTC m=+0.027508341 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:02:11 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 12:02:11 compute-0 podman[79788]: 2025-12-09 12:02:11.376131829 +0000 UTC m=+0.225637601 container create a9f81930d1d1119ff80644b8b8f09ae528aaa404a6ae0e3178cefdd7c7797362 (image=quay.io/ceph/ceph:v19, name=nice_margulis, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 12:02:11 compute-0 systemd[1]: Started libpod-conmon-a9f81930d1d1119ff80644b8b8f09ae528aaa404a6ae0e3178cefdd7c7797362.scope.
Dec 09 12:02:11 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:02:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84195ad156f68fc81effc08fa0e731d2c5d2a0c7d17d477dd3311a3c0e7a4f9a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:02:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84195ad156f68fc81effc08fa0e731d2c5d2a0c7d17d477dd3311a3c0e7a4f9a/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 09 12:02:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84195ad156f68fc81effc08fa0e731d2c5d2a0c7d17d477dd3311a3c0e7a4f9a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:02:11 compute-0 podman[79788]: 2025-12-09 12:02:11.648459432 +0000 UTC m=+0.497965234 container init a9f81930d1d1119ff80644b8b8f09ae528aaa404a6ae0e3178cefdd7c7797362 (image=quay.io/ceph/ceph:v19, name=nice_margulis, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 12:02:11 compute-0 podman[79788]: 2025-12-09 12:02:11.655989877 +0000 UTC m=+0.505495649 container start a9f81930d1d1119ff80644b8b8f09ae528aaa404a6ae0e3178cefdd7c7797362 (image=quay.io/ceph/ceph:v19, name=nice_margulis, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 12:02:11 compute-0 systemd[1]: Reloading.
Dec 09 12:02:11 compute-0 podman[79788]: 2025-12-09 12:02:11.72321514 +0000 UTC m=+0.572720912 container attach a9f81930d1d1119ff80644b8b8f09ae528aaa404a6ae0e3178cefdd7c7797362 (image=quay.io/ceph/ceph:v19, name=nice_margulis, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 12:02:11 compute-0 systemd-rc-local-generator[79834]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 12:02:11 compute-0 systemd-sysv-generator[79839]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 12:02:11 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 12:02:12 compute-0 systemd[1]: Reloading.
Dec 09 12:02:12 compute-0 systemd-rc-local-generator[79892]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 12:02:12 compute-0 systemd-sysv-generator[79895]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 12:02:12 compute-0 ceph-mgr[74679]: log_channel(audit) log [DBG] : from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 09 12:02:12 compute-0 nice_margulis[79804]: 
Dec 09 12:02:12 compute-0 nice_margulis[79804]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec 09 12:02:12 compute-0 podman[79904]: 2025-12-09 12:02:12.222466324 +0000 UTC m=+0.031806667 container died a9f81930d1d1119ff80644b8b8f09ae528aaa404a6ae0e3178cefdd7c7797362 (image=quay.io/ceph/ceph:v19, name=nice_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec 09 12:02:12 compute-0 systemd[1]: libpod-a9f81930d1d1119ff80644b8b8f09ae528aaa404a6ae0e3178cefdd7c7797362.scope: Deactivated successfully.
Dec 09 12:02:12 compute-0 systemd[1]: Starting Ceph crash.compute-0 for 750b57e3-924f-51a5-ab09-01517535f732...
Dec 09 12:02:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-84195ad156f68fc81effc08fa0e731d2c5d2a0c7d17d477dd3311a3c0e7a4f9a-merged.mount: Deactivated successfully.
Dec 09 12:02:12 compute-0 podman[79904]: 2025-12-09 12:02:12.401727443 +0000 UTC m=+0.211067766 container remove a9f81930d1d1119ff80644b8b8f09ae528aaa404a6ae0e3178cefdd7c7797362 (image=quay.io/ceph/ceph:v19, name=nice_margulis, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 12:02:12 compute-0 systemd[1]: libpod-conmon-a9f81930d1d1119ff80644b8b8f09ae528aaa404a6ae0e3178cefdd7c7797362.scope: Deactivated successfully.
Dec 09 12:02:12 compute-0 sudo[79771]: pam_unix(sudo:session): session closed for user root
Dec 09 12:02:12 compute-0 podman[79971]: 2025-12-09 12:02:12.607040948 +0000 UTC m=+0.074266315 container create 28ec9558b7da152bd441850d4b8b0f1d1d35006c7e363eeac49d60942390571b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-750b57e3-924f-51a5-ab09-01517535f732-crash-compute-0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 12:02:12 compute-0 podman[79971]: 2025-12-09 12:02:12.55565 +0000 UTC m=+0.022875357 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 09 12:02:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c47899908838c19f7065cd36550f0b57b707b3991d55cdb4152bf5623016a35/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:02:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c47899908838c19f7065cd36550f0b57b707b3991d55cdb4152bf5623016a35/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 12:02:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c47899908838c19f7065cd36550f0b57b707b3991d55cdb4152bf5623016a35/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:02:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c47899908838c19f7065cd36550f0b57b707b3991d55cdb4152bf5623016a35/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 12:02:12 compute-0 sudo[80012]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mosmtpiaqusrmqmhghnvoympohvwihke ; /usr/bin/python3'
Dec 09 12:02:12 compute-0 sudo[80012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:02:12 compute-0 podman[79971]: 2025-12-09 12:02:12.813952123 +0000 UTC m=+0.281177490 container init 28ec9558b7da152bd441850d4b8b0f1d1d35006c7e363eeac49d60942390571b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-750b57e3-924f-51a5-ab09-01517535f732-crash-compute-0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 12:02:12 compute-0 podman[79971]: 2025-12-09 12:02:12.819806696 +0000 UTC m=+0.287032033 container start 28ec9558b7da152bd441850d4b8b0f1d1d35006c7e363eeac49d60942390571b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-750b57e3-924f-51a5-ab09-01517535f732-crash-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 12:02:12 compute-0 bash[79971]: 28ec9558b7da152bd441850d4b8b0f1d1d35006c7e363eeac49d60942390571b
Dec 09 12:02:12 compute-0 systemd[1]: Started Ceph crash.compute-0 for 750b57e3-924f-51a5-ab09-01517535f732.
Dec 09 12:02:12 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-crash-compute-0[80004]: INFO:ceph-crash:pinging cluster to exercise our key
Dec 09 12:02:12 compute-0 python3[80014]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 750b57e3-924f-51a5-ab09-01517535f732 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 12:02:12 compute-0 sudo[79638]: pam_unix(sudo:session): session closed for user root
Dec 09 12:02:12 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-crash-compute-0[80004]: 2025-12-09T12:02:12.985+0000 7f326f177640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Dec 09 12:02:12 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-crash-compute-0[80004]: 2025-12-09T12:02:12.985+0000 7f326f177640 -1 AuthRegistry(0x7f32680698f0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Dec 09 12:02:12 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-crash-compute-0[80004]: 2025-12-09T12:02:12.986+0000 7f326f177640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Dec 09 12:02:12 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-crash-compute-0[80004]: 2025-12-09T12:02:12.986+0000 7f326f177640 -1 AuthRegistry(0x7f326f175ff0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Dec 09 12:02:12 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-crash-compute-0[80004]: 2025-12-09T12:02:12.987+0000 7f326ceec640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Dec 09 12:02:12 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-crash-compute-0[80004]: 2025-12-09T12:02:12.987+0000 7f326f177640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Dec 09 12:02:12 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-crash-compute-0[80004]: [errno 13] RADOS permission denied (error connecting to the cluster)
Dec 09 12:02:12 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 12:02:12 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-crash-compute-0[80004]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Dec 09 12:02:13 compute-0 ansible-async_wrapper.py[78846]: Done in kid B.
Dec 09 12:02:13 compute-0 podman[80019]: 2025-12-09 12:02:12.951533359 +0000 UTC m=+0.026966756 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:02:13 compute-0 podman[80019]: 2025-12-09 12:02:13.102328947 +0000 UTC m=+0.177762314 container create 86f9fb97adec1879c9789da0b1b4dc7446943556d5390078598cbbbe17a6a227 (image=quay.io/ceph/ceph:v19, name=hungry_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1)
Dec 09 12:02:13 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:13 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 12:02:13 compute-0 ceph-mon[74388]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 12:02:13 compute-0 ceph-mon[74388]: from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 09 12:02:13 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:13 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec 09 12:02:13 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:13 compute-0 ceph-mgr[74679]: [progress INFO root] complete: finished ev 0c771901-b5df-4697-8d7c-d638c49ea7d2 (Updating crash deployment (+1 -> 1))
Dec 09 12:02:13 compute-0 ceph-mgr[74679]: [progress INFO root] Completed event 0c771901-b5df-4697-8d7c-d638c49ea7d2 (Updating crash deployment (+1 -> 1)) in 3 seconds
Dec 09 12:02:13 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec 09 12:02:13 compute-0 systemd[1]: Started libpod-conmon-86f9fb97adec1879c9789da0b1b4dc7446943556d5390078598cbbbe17a6a227.scope.
Dec 09 12:02:13 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:02:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdbb70a1556180613b45f4403841ff742c015efb40cdaf2be30f7989c0fde2cf/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 09 12:02:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdbb70a1556180613b45f4403841ff742c015efb40cdaf2be30f7989c0fde2cf/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:02:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdbb70a1556180613b45f4403841ff742c015efb40cdaf2be30f7989c0fde2cf/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:02:13 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:13 compute-0 podman[80019]: 2025-12-09 12:02:13.223888061 +0000 UTC m=+0.299321448 container init 86f9fb97adec1879c9789da0b1b4dc7446943556d5390078598cbbbe17a6a227 (image=quay.io/ceph/ceph:v19, name=hungry_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 09 12:02:13 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 09 12:02:13 compute-0 podman[80019]: 2025-12-09 12:02:13.232447208 +0000 UTC m=+0.307880565 container start 86f9fb97adec1879c9789da0b1b4dc7446943556d5390078598cbbbe17a6a227 (image=quay.io/ceph/ceph:v19, name=hungry_leavitt, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 09 12:02:13 compute-0 podman[80019]: 2025-12-09 12:02:13.397059419 +0000 UTC m=+0.472492796 container attach 86f9fb97adec1879c9789da0b1b4dc7446943556d5390078598cbbbe17a6a227 (image=quay.io/ceph/ceph:v19, name=hungry_leavitt, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 12:02:13 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:13 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 09 12:02:13 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:13 compute-0 sudo[80068]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 09 12:02:13 compute-0 sudo[80068]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:02:13 compute-0 sudo[80068]: pam_unix(sudo:session): session closed for user root
Dec 09 12:02:13 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0)
Dec 09 12:02:13 compute-0 sudo[80093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 12:02:13 compute-0 sudo[80093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:02:13 compute-0 sudo[80093]: pam_unix(sudo:session): session closed for user root
Dec 09 12:02:13 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4068725980' entity='client.admin' 
Dec 09 12:02:13 compute-0 sudo[80119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Dec 09 12:02:13 compute-0 sudo[80119]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:02:13 compute-0 systemd[1]: libpod-86f9fb97adec1879c9789da0b1b4dc7446943556d5390078598cbbbe17a6a227.scope: Deactivated successfully.
Dec 09 12:02:13 compute-0 conmon[80045]: conmon 86f9fb97adec1879c978 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-86f9fb97adec1879c9789da0b1b4dc7446943556d5390078598cbbbe17a6a227.scope/container/memory.events
Dec 09 12:02:13 compute-0 podman[80019]: 2025-12-09 12:02:13.663367673 +0000 UTC m=+0.738801060 container died 86f9fb97adec1879c9789da0b1b4dc7446943556d5390078598cbbbe17a6a227 (image=quay.io/ceph/ceph:v19, name=hungry_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec 09 12:02:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-cdbb70a1556180613b45f4403841ff742c015efb40cdaf2be30f7989c0fde2cf-merged.mount: Deactivated successfully.
Dec 09 12:02:13 compute-0 podman[80019]: 2025-12-09 12:02:13.767394028 +0000 UTC m=+0.842827395 container remove 86f9fb97adec1879c9789da0b1b4dc7446943556d5390078598cbbbe17a6a227 (image=quay.io/ceph/ceph:v19, name=hungry_leavitt, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True)
Dec 09 12:02:13 compute-0 systemd[1]: libpod-conmon-86f9fb97adec1879c9789da0b1b4dc7446943556d5390078598cbbbe17a6a227.scope: Deactivated successfully.
Dec 09 12:02:13 compute-0 sudo[80012]: pam_unix(sudo:session): session closed for user root
Dec 09 12:02:13 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 12:02:13 compute-0 sudo[80204]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgbntuhhdpskiofdjodbtzxxbghjbkry ; /usr/bin/python3'
Dec 09 12:02:13 compute-0 sudo[80204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:02:14 compute-0 python3[80211]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 750b57e3-924f-51a5-ab09-01517535f732 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 12:02:14 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:14 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:14 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:14 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:14 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:14 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:14 compute-0 ceph-mon[74388]: from='client.? 192.168.122.100:0/4068725980' entity='client.admin' 
Dec 09 12:02:14 compute-0 ceph-mon[74388]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 12:02:14 compute-0 podman[80255]: 2025-12-09 12:02:14.231499381 +0000 UTC m=+0.095570521 container create 354541d2bb915768b2c88ed3dc6a3d83148e81b1a621026b27593c51392426e4 (image=quay.io/ceph/ceph:v19, name=tender_dubinsky, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 09 12:02:14 compute-0 podman[80255]: 2025-12-09 12:02:14.16656991 +0000 UTC m=+0.030641080 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:02:14 compute-0 systemd[1]: Started libpod-conmon-354541d2bb915768b2c88ed3dc6a3d83148e81b1a621026b27593c51392426e4.scope.
Dec 09 12:02:14 compute-0 podman[80262]: 2025-12-09 12:02:14.357719121 +0000 UTC m=+0.199843514 container exec a4b836a90c212a6dcd631d0879d1d67c676cdc16d15f42acc55a122ac896ef53 (image=quay.io/ceph/ceph:v19, name=ceph-750b57e3-924f-51a5-ab09-01517535f732-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 09 12:02:14 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:02:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d6ffd7e07d920780896c19e7827b8024664e346d830d5e1bf11077dd64b3598/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:02:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d6ffd7e07d920780896c19e7827b8024664e346d830d5e1bf11077dd64b3598/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:02:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d6ffd7e07d920780896c19e7827b8024664e346d830d5e1bf11077dd64b3598/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 09 12:02:14 compute-0 podman[80255]: 2025-12-09 12:02:14.391491748 +0000 UTC m=+0.255562898 container init 354541d2bb915768b2c88ed3dc6a3d83148e81b1a621026b27593c51392426e4 (image=quay.io/ceph/ceph:v19, name=tender_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True)
Dec 09 12:02:14 compute-0 podman[80255]: 2025-12-09 12:02:14.399330953 +0000 UTC m=+0.263402083 container start 354541d2bb915768b2c88ed3dc6a3d83148e81b1a621026b27593c51392426e4 (image=quay.io/ceph/ceph:v19, name=tender_dubinsky, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 09 12:02:14 compute-0 podman[80255]: 2025-12-09 12:02:14.433748221 +0000 UTC m=+0.297819361 container attach 354541d2bb915768b2c88ed3dc6a3d83148e81b1a621026b27593c51392426e4 (image=quay.io/ceph/ceph:v19, name=tender_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 12:02:14 compute-0 podman[80262]: 2025-12-09 12:02:14.458740573 +0000 UTC m=+0.300864966 container exec_died a4b836a90c212a6dcd631d0879d1d67c676cdc16d15f42acc55a122ac896ef53 (image=quay.io/ceph/ceph:v19, name=ceph-750b57e3-924f-51a5-ab09-01517535f732-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 09 12:02:14 compute-0 sudo[80119]: pam_unix(sudo:session): session closed for user root
Dec 09 12:02:14 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 12:02:14 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:14 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 12:02:14 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0)
Dec 09 12:02:14 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:14 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 12:02:14 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:02:14 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1376379543' entity='client.admin' 
Dec 09 12:02:14 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 09 12:02:14 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 09 12:02:14 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 09 12:02:14 compute-0 systemd[1]: libpod-354541d2bb915768b2c88ed3dc6a3d83148e81b1a621026b27593c51392426e4.scope: Deactivated successfully.
Dec 09 12:02:14 compute-0 podman[80255]: 2025-12-09 12:02:14.83477729 +0000 UTC m=+0.698848420 container died 354541d2bb915768b2c88ed3dc6a3d83148e81b1a621026b27593c51392426e4 (image=quay.io/ceph/ceph:v19, name=tender_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Dec 09 12:02:14 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:14 compute-0 ceph-mgr[74679]: [progress INFO root] Writing back 1 completed events
Dec 09 12:02:14 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 09 12:02:14 compute-0 sudo[80369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 09 12:02:14 compute-0 sudo[80369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:02:14 compute-0 sudo[80369]: pam_unix(sudo:session): session closed for user root
Dec 09 12:02:15 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0)
Dec 09 12:02:15 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d6ffd7e07d920780896c19e7827b8024664e346d830d5e1bf11077dd64b3598-merged.mount: Deactivated successfully.
Dec 09 12:02:15 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:15 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0)
Dec 09 12:02:15 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:15 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0)
Dec 09 12:02:15 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:15 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0)
Dec 09 12:02:15 compute-0 podman[80255]: 2025-12-09 12:02:15.204391356 +0000 UTC m=+1.068462486 container remove 354541d2bb915768b2c88ed3dc6a3d83148e81b1a621026b27593c51392426e4 (image=quay.io/ceph/ceph:v19, name=tender_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 09 12:02:15 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:15 compute-0 systemd[1]: libpod-conmon-354541d2bb915768b2c88ed3dc6a3d83148e81b1a621026b27593c51392426e4.scope: Deactivated successfully.
Dec 09 12:02:15 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Dec 09 12:02:15 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Dec 09 12:02:15 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec 09 12:02:15 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 09 12:02:15 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec 09 12:02:15 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 09 12:02:15 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 12:02:15 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:02:15 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Dec 09 12:02:15 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Dec 09 12:02:15 compute-0 sudo[80204]: pam_unix(sudo:session): session closed for user root
Dec 09 12:02:15 compute-0 sudo[80400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 12:02:15 compute-0 sudo[80400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:02:15 compute-0 sudo[80400]: pam_unix(sudo:session): session closed for user root
Dec 09 12:02:15 compute-0 sudo[80425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph:v19 --timeout 895 _orch deploy --fsid 750b57e3-924f-51a5-ab09-01517535f732
Dec 09 12:02:15 compute-0 sudo[80425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:02:15 compute-0 sudo[80473]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vudjyvvolbsgebnoxjbbmssxgkwfahcf ; /usr/bin/python3'
Dec 09 12:02:15 compute-0 sudo[80473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:02:15 compute-0 python3[80475]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 750b57e3-924f-51a5-ab09-01517535f732 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 12:02:15 compute-0 podman[80491]: 2025-12-09 12:02:15.656991699 +0000 UTC m=+0.055456786 container create e500c22aa89e6525b986cf95ab85023feaa609524663d5f55f1693713fd82459 (image=quay.io/ceph/ceph:v19, name=amazing_cori, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 09 12:02:15 compute-0 systemd[1]: Started libpod-conmon-e500c22aa89e6525b986cf95ab85023feaa609524663d5f55f1693713fd82459.scope.
Dec 09 12:02:15 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:15 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:15 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:02:15 compute-0 ceph-mon[74388]: from='client.? 192.168.122.100:0/1376379543' entity='client.admin' 
Dec 09 12:02:15 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 09 12:02:15 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:15 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:15 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:15 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:15 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:15 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:15 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 09 12:02:15 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 09 12:02:15 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:02:15 compute-0 podman[80499]: 2025-12-09 12:02:15.708680787 +0000 UTC m=+0.082759311 container create 9714261f483aa3d7cbfddf97382f01e92e74bc8759efe8da71da9bd8cdc55a08 (image=quay.io/ceph/ceph:v19, name=wonderful_chaum, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 09 12:02:15 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:02:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c99f2fe4e6f44a9bacbb05adc87aeb2f652d6baa6a5adf6aafb1b11a544c6826/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 09 12:02:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c99f2fe4e6f44a9bacbb05adc87aeb2f652d6baa6a5adf6aafb1b11a544c6826/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:02:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c99f2fe4e6f44a9bacbb05adc87aeb2f652d6baa6a5adf6aafb1b11a544c6826/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:02:15 compute-0 podman[80491]: 2025-12-09 12:02:15.623403088 +0000 UTC m=+0.021868175 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:02:15 compute-0 systemd[1]: Started libpod-conmon-9714261f483aa3d7cbfddf97382f01e92e74bc8759efe8da71da9bd8cdc55a08.scope.
Dec 09 12:02:15 compute-0 podman[80491]: 2025-12-09 12:02:15.736617291 +0000 UTC m=+0.135082378 container init e500c22aa89e6525b986cf95ab85023feaa609524663d5f55f1693713fd82459 (image=quay.io/ceph/ceph:v19, name=amazing_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec 09 12:02:15 compute-0 podman[80491]: 2025-12-09 12:02:15.742440174 +0000 UTC m=+0.140905241 container start e500c22aa89e6525b986cf95ab85023feaa609524663d5f55f1693713fd82459 (image=quay.io/ceph/ceph:v19, name=amazing_cori, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 09 12:02:15 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:02:15 compute-0 podman[80491]: 2025-12-09 12:02:15.763934726 +0000 UTC m=+0.162399813 container attach e500c22aa89e6525b986cf95ab85023feaa609524663d5f55f1693713fd82459 (image=quay.io/ceph/ceph:v19, name=amazing_cori, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 12:02:15 compute-0 podman[80499]: 2025-12-09 12:02:15.779243185 +0000 UTC m=+0.153321739 container init 9714261f483aa3d7cbfddf97382f01e92e74bc8759efe8da71da9bd8cdc55a08 (image=quay.io/ceph/ceph:v19, name=wonderful_chaum, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 12:02:15 compute-0 podman[80499]: 2025-12-09 12:02:15.686106601 +0000 UTC m=+0.060185145 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:02:15 compute-0 podman[80499]: 2025-12-09 12:02:15.785673926 +0000 UTC m=+0.159752450 container start 9714261f483aa3d7cbfddf97382f01e92e74bc8759efe8da71da9bd8cdc55a08 (image=quay.io/ceph/ceph:v19, name=wonderful_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 12:02:15 compute-0 wonderful_chaum[80526]: 167 167
Dec 09 12:02:15 compute-0 systemd[1]: libpod-9714261f483aa3d7cbfddf97382f01e92e74bc8759efe8da71da9bd8cdc55a08.scope: Deactivated successfully.
Dec 09 12:02:15 compute-0 podman[80499]: 2025-12-09 12:02:15.802497032 +0000 UTC m=+0.176575556 container attach 9714261f483aa3d7cbfddf97382f01e92e74bc8759efe8da71da9bd8cdc55a08 (image=quay.io/ceph/ceph:v19, name=wonderful_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 09 12:02:15 compute-0 podman[80499]: 2025-12-09 12:02:15.803114272 +0000 UTC m=+0.177192796 container died 9714261f483aa3d7cbfddf97382f01e92e74bc8759efe8da71da9bd8cdc55a08 (image=quay.io/ceph/ceph:v19, name=wonderful_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 12:02:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-d9bf84447d50d1782285c5e5fdae7ac4b60b46af564aaa0c0723461c4af1f37e-merged.mount: Deactivated successfully.
Dec 09 12:02:15 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 12:02:15 compute-0 podman[80499]: 2025-12-09 12:02:15.867135635 +0000 UTC m=+0.241214159 container remove 9714261f483aa3d7cbfddf97382f01e92e74bc8759efe8da71da9bd8cdc55a08 (image=quay.io/ceph/ceph:v19, name=wonderful_chaum, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 12:02:15 compute-0 systemd[1]: libpod-conmon-9714261f483aa3d7cbfddf97382f01e92e74bc8759efe8da71da9bd8cdc55a08.scope: Deactivated successfully.
Dec 09 12:02:15 compute-0 sudo[80425]: pam_unix(sudo:session): session closed for user root
Dec 09 12:02:15 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 12:02:15 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:15 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 12:02:15 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:15 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.wfxreg (unknown last config time)...
Dec 09 12:02:15 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.wfxreg (unknown last config time)...
Dec 09 12:02:15 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.wfxreg", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 09 12:02:15 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.wfxreg", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 09 12:02:15 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 09 12:02:15 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 09 12:02:15 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 12:02:15 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:02:15 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.wfxreg on compute-0
Dec 09 12:02:15 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.wfxreg on compute-0
Dec 09 12:02:16 compute-0 sudo[80564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 12:02:16 compute-0 sudo[80564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:02:16 compute-0 sudo[80564]: pam_unix(sudo:session): session closed for user root
Dec 09 12:02:16 compute-0 sudo[80589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph:v19 --timeout 895 _orch deploy --fsid 750b57e3-924f-51a5-ab09-01517535f732
Dec 09 12:02:16 compute-0 sudo[80589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:02:16 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0)
Dec 09 12:02:16 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2427131404' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Dec 09 12:02:16 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 12:02:16 compute-0 podman[80630]: 2025-12-09 12:02:16.455820878 +0000 UTC m=+0.092096034 container create e8ab465d5f02eb0eef88e79b9e692b566097b15bf9fa6977f931bd88aeb5d6a4 (image=quay.io/ceph/ceph:v19, name=inspiring_galileo, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Dec 09 12:02:16 compute-0 podman[80630]: 2025-12-09 12:02:16.391530815 +0000 UTC m=+0.027806001 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:02:16 compute-0 systemd[1]: Started libpod-conmon-e8ab465d5f02eb0eef88e79b9e692b566097b15bf9fa6977f931bd88aeb5d6a4.scope.
Dec 09 12:02:16 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:02:16 compute-0 podman[80630]: 2025-12-09 12:02:16.613316576 +0000 UTC m=+0.249591762 container init e8ab465d5f02eb0eef88e79b9e692b566097b15bf9fa6977f931bd88aeb5d6a4 (image=quay.io/ceph/ceph:v19, name=inspiring_galileo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 12:02:16 compute-0 podman[80630]: 2025-12-09 12:02:16.620437278 +0000 UTC m=+0.256712434 container start e8ab465d5f02eb0eef88e79b9e692b566097b15bf9fa6977f931bd88aeb5d6a4 (image=quay.io/ceph/ceph:v19, name=inspiring_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec 09 12:02:16 compute-0 inspiring_galileo[80646]: 167 167
Dec 09 12:02:16 compute-0 podman[80630]: 2025-12-09 12:02:16.625819356 +0000 UTC m=+0.262094682 container attach e8ab465d5f02eb0eef88e79b9e692b566097b15bf9fa6977f931bd88aeb5d6a4 (image=quay.io/ceph/ceph:v19, name=inspiring_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec 09 12:02:16 compute-0 systemd[1]: libpod-e8ab465d5f02eb0eef88e79b9e692b566097b15bf9fa6977f931bd88aeb5d6a4.scope: Deactivated successfully.
Dec 09 12:02:16 compute-0 podman[80630]: 2025-12-09 12:02:16.626662793 +0000 UTC m=+0.262937949 container died e8ab465d5f02eb0eef88e79b9e692b566097b15bf9fa6977f931bd88aeb5d6a4 (image=quay.io/ceph/ceph:v19, name=inspiring_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 09 12:02:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-22fb0f0a033cbd541f75514d336ff286b62123204b630268418891979db45452-merged.mount: Deactivated successfully.
Dec 09 12:02:16 compute-0 podman[80630]: 2025-12-09 12:02:16.66301448 +0000 UTC m=+0.299289636 container remove e8ab465d5f02eb0eef88e79b9e692b566097b15bf9fa6977f931bd88aeb5d6a4 (image=quay.io/ceph/ceph:v19, name=inspiring_galileo, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 12:02:16 compute-0 systemd[1]: libpod-conmon-e8ab465d5f02eb0eef88e79b9e692b566097b15bf9fa6977f931bd88aeb5d6a4.scope: Deactivated successfully.
Dec 09 12:02:16 compute-0 sudo[80589]: pam_unix(sudo:session): session closed for user root
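Both reconfigure passes above (mon.compute-0 at 12:02:15, mgr.compute-0.wfxreg at 12:02:16) follow the same mechanism: the cephadm mgr module connects as ceph-admin, then sudo-runs the copied cephadm binary under /var/lib/ceph/<fsid>/ with the internal `_orch deploy` subcommand to push the daemon's regenerated configuration. To inspect what such a pass left on the host, the same binary can be run with its `ls` subcommand; a sketch, assuming only the binary path shown in the logged commands (the `ls` invocation itself does not appear in this log):

# List the cephadm-managed daemons (mon, mgr, ...) present on this host, as JSON.
sudo /usr/bin/python3 \
  /var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 \
  ls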
Dec 09 12:02:16 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 12:02:16 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:16 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 12:02:16 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:16 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 12:02:16 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:02:16 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 09 12:02:16 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 09 12:02:16 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 09 12:02:16 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:16 compute-0 sudo[80663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 09 12:02:16 compute-0 sudo[80663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:02:16 compute-0 sudo[80663]: pam_unix(sudo:session): session closed for user root
Dec 09 12:02:17 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Dec 09 12:02:17 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 09 12:02:17 compute-0 ceph-mon[74388]: Reconfiguring mon.compute-0 (unknown last config time)...
Dec 09 12:02:17 compute-0 ceph-mon[74388]: Reconfiguring daemon mon.compute-0 on compute-0
Dec 09 12:02:17 compute-0 ceph-mon[74388]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 12:02:17 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:17 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:17 compute-0 ceph-mon[74388]: Reconfiguring mgr.compute-0.wfxreg (unknown last config time)...
Dec 09 12:02:17 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.wfxreg", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 09 12:02:17 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 09 12:02:17 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:02:17 compute-0 ceph-mon[74388]: Reconfiguring daemon mgr.compute-0.wfxreg on compute-0
Dec 09 12:02:17 compute-0 ceph-mon[74388]: from='client.? 192.168.122.100:0/2427131404' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Dec 09 12:02:17 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:17 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:17 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:02:17 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 09 12:02:17 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:17 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2427131404' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Dec 09 12:02:17 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Dec 09 12:02:17 compute-0 amazing_cori[80521]: set require_min_compat_client to mimic
Dec 09 12:02:17 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Dec 09 12:02:17 compute-0 systemd[1]: libpod-e500c22aa89e6525b986cf95ab85023feaa609524663d5f55f1693713fd82459.scope: Deactivated successfully.
Dec 09 12:02:17 compute-0 podman[80491]: 2025-12-09 12:02:17.075722944 +0000 UTC m=+1.474188011 container died e500c22aa89e6525b986cf95ab85023feaa609524663d5f55f1693713fd82459 (image=quay.io/ceph/ceph:v19, name=amazing_cori, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Dec 09 12:02:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-c99f2fe4e6f44a9bacbb05adc87aeb2f652d6baa6a5adf6aafb1b11a544c6826-merged.mount: Deactivated successfully.
Dec 09 12:02:17 compute-0 podman[80491]: 2025-12-09 12:02:17.191348753 +0000 UTC m=+1.589813820 container remove e500c22aa89e6525b986cf95ab85023feaa609524663d5f55f1693713fd82459 (image=quay.io/ceph/ceph:v19, name=amazing_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Dec 09 12:02:17 compute-0 systemd[1]: libpod-conmon-e500c22aa89e6525b986cf95ab85023feaa609524663d5f55f1693713fd82459.scope: Deactivated successfully.
Dec 09 12:02:17 compute-0 sudo[80473]: pam_unix(sudo:session): session closed for user root
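The Ansible task invoked at 12:02:15 above wraps a one-shot ceph CLI call in a disposable ceph:v19 container (amazing_cori); the container confirms the change at 12:02:17 with "set require_min_compat_client to mimic" before it is removed. Stripped of the Ansible wrapper, the equivalent manual invocation would look like the sketch below; the image, fsid, and paths are taken verbatim from the logged command (the two unrelated spec/conf bind mounts are omitted here on the assumption that this subcommand does not read them).

# One-shot admin ceph CLI call from the ceph:v19 image, as the task above runs it.
# Raising require-min-compat-client to mimic refuses connections from pre-Mimic clients.
podman run --rm --net=host --ipc=host \
  --volume /etc/ceph:/etc/ceph:z \
  --entrypoint ceph quay.io/ceph/ceph:v19 \
  --fsid 750b57e3-924f-51a5-ab09-01517535f732 \
  -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
  osd set-require-min-compat-client mimic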
Dec 09 12:02:17 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 12:02:17 compute-0 sudo[80725]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-robtrphzkbbvxlxnghrwuftgeajkmind ; /usr/bin/python3'
Dec 09 12:02:17 compute-0 sudo[80725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:02:18 compute-0 python3[80727]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 750b57e3-924f-51a5-ab09-01517535f732 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 12:02:18 compute-0 ceph-mon[74388]: from='client.? 192.168.122.100:0/2427131404' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Dec 09 12:02:18 compute-0 ceph-mon[74388]: osdmap e3: 0 total, 0 up, 0 in
Dec 09 12:02:18 compute-0 podman[80728]: 2025-12-09 12:02:18.105077136 +0000 UTC m=+0.048286262 container create 59b79aa04ab636a10b1a224277e84877f815f64dd3735b04185ef8e47c197718 (image=quay.io/ceph/ceph:v19, name=nice_euclid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Dec 09 12:02:18 compute-0 systemd[1]: Started libpod-conmon-59b79aa04ab636a10b1a224277e84877f815f64dd3735b04185ef8e47c197718.scope.
Dec 09 12:02:18 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:02:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/137f9de7936c396b7db3855460763291f0c322f5436f2359c1ce36065f68b22e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:02:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/137f9de7936c396b7db3855460763291f0c322f5436f2359c1ce36065f68b22e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:02:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/137f9de7936c396b7db3855460763291f0c322f5436f2359c1ce36065f68b22e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 09 12:02:18 compute-0 podman[80728]: 2025-12-09 12:02:18.086620248 +0000 UTC m=+0.029829394 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:02:18 compute-0 podman[80728]: 2025-12-09 12:02:18.182941582 +0000 UTC m=+0.126150728 container init 59b79aa04ab636a10b1a224277e84877f815f64dd3735b04185ef8e47c197718 (image=quay.io/ceph/ceph:v19, name=nice_euclid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 09 12:02:18 compute-0 podman[80728]: 2025-12-09 12:02:18.191154289 +0000 UTC m=+0.134363405 container start 59b79aa04ab636a10b1a224277e84877f815f64dd3735b04185ef8e47c197718 (image=quay.io/ceph/ceph:v19, name=nice_euclid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 12:02:18 compute-0 podman[80728]: 2025-12-09 12:02:18.194701151 +0000 UTC m=+0.137910297 container attach 59b79aa04ab636a10b1a224277e84877f815f64dd3735b04185ef8e47c197718 (image=quay.io/ceph/ceph:v19, name=nice_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 09 12:02:18 compute-0 ceph-mgr[74679]: log_channel(audit) log [DBG] : from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 12:02:18 compute-0 sudo[80768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 12:02:18 compute-0 sudo[80768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:02:18 compute-0 sudo[80768]: pam_unix(sudo:session): session closed for user root
Dec 09 12:02:18 compute-0 sudo[80793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host --expect-hostname compute-0
Dec 09 12:02:18 compute-0 sudo[80793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:02:19 compute-0 sudo[80793]: pam_unix(sudo:session): session closed for user root
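check-host is cephadm's per-host preflight: it verifies that the hostname matches what the orchestrator expects and that prerequisites such as a container runtime and time synchronization are present before the host is enrolled. The run above for compute-0 can be reproduced by hand exactly as logged:

# Preflight compute-0 the same way the orchestrator just did (command copied from the log).
sudo /usr/bin/python3 \
  /var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 \
  --timeout 895 check-host --expect-hostname compute-0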
Dec 09 12:02:19 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 09 12:02:19 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:19 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 09 12:02:19 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:19 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 09 12:02:19 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:19 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 09 12:02:19 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:19 compute-0 ceph-mgr[74679]: [cephadm INFO root] Added host compute-0
Dec 09 12:02:19 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Added host compute-0
Dec 09 12:02:19 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 12:02:19 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:02:19 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 09 12:02:19 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 09 12:02:19 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 09 12:02:19 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:19 compute-0 ceph-mon[74388]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 12:02:19 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:19 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:19 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:19 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:19 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:02:19 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 09 12:02:19 compute-0 sudo[80838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 09 12:02:19 compute-0 sudo[80838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:02:19 compute-0 sudo[80838]: pam_unix(sudo:session): session closed for user root
Dec 09 12:02:19 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 12:02:19 compute-0 ceph-mgr[74679]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 12:02:19 compute-0 ceph-mgr[74679]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 12:02:19 compute-0 ceph-mgr[74679]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 12:02:19 compute-0 ceph-mgr[74679]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 12:02:19 compute-0 ceph-mgr[74679]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 12:02:19 compute-0 ceph-mgr[74679]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 12:02:20 compute-0 ceph-mon[74388]: from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 12:02:20 compute-0 ceph-mon[74388]: Added host compute-0
Dec 09 12:02:20 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:20 compute-0 ceph-mon[74388]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 12:02:20 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-1
Dec 09 12:02:20 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-1
Dec 09 12:02:21 compute-0 ceph-mon[74388]: Deploying cephadm binary to compute-1
Dec 09 12:02:21 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 12:02:21 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 12:02:22 compute-0 ceph-mon[74388]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 12:02:23 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 12:02:24 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 09 12:02:24 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:24 compute-0 ceph-mgr[74679]: [cephadm INFO root] Added host compute-1
Dec 09 12:02:24 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Added host compute-1
Dec 09 12:02:24 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 09 12:02:24 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:24 compute-0 ceph-mon[74388]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 12:02:24 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:24 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:25 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 09 12:02:25 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:25 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-2
Dec 09 12:02:25 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-2
Dec 09 12:02:25 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 12:02:25 compute-0 ceph-mon[74388]: Added host compute-1
Dec 09 12:02:25 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:26 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 09 12:02:26 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:26 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 12:02:26 compute-0 ceph-mon[74388]: Deploying cephadm binary to compute-2
Dec 09 12:02:26 compute-0 ceph-mon[74388]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 12:02:26 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:27 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 12:02:28 compute-0 ceph-mon[74388]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 12:02:29 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 09 12:02:29 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:29 compute-0 ceph-mgr[74679]: [cephadm INFO root] Added host compute-2
Dec 09 12:02:29 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Added host compute-2
Dec 09 12:02:29 compute-0 ceph-mgr[74679]: [cephadm INFO root] Saving service mon spec with placement compute-0;compute-1;compute-2
Dec 09 12:02:29 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0;compute-1;compute-2
Dec 09 12:02:29 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 09 12:02:29 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:29 compute-0 ceph-mgr[74679]: [cephadm INFO root] Saving service mgr spec with placement compute-0;compute-1;compute-2
Dec 09 12:02:29 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0;compute-1;compute-2
Dec 09 12:02:29 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 09 12:02:29 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:29 compute-0 ceph-mgr[74679]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Dec 09 12:02:29 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Dec 09 12:02:29 compute-0 ceph-mgr[74679]: [cephadm INFO root] Marking host: compute-1 for OSDSpec preview refresh.
Dec 09 12:02:29 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Marking host: compute-1 for OSDSpec preview refresh.
Dec 09 12:02:29 compute-0 ceph-mgr[74679]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Dec 09 12:02:29 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Dec 09 12:02:29 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0)
Dec 09 12:02:29 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:29 compute-0 nice_euclid[80743]: Added host 'compute-0' with addr '192.168.122.100'
Dec 09 12:02:29 compute-0 nice_euclid[80743]: Added host 'compute-1' with addr '192.168.122.101'
Dec 09 12:02:29 compute-0 nice_euclid[80743]: Added host 'compute-2' with addr '192.168.122.102'
Dec 09 12:02:29 compute-0 nice_euclid[80743]: Scheduled mon update...
Dec 09 12:02:29 compute-0 nice_euclid[80743]: Scheduled mgr update...
Dec 09 12:02:29 compute-0 nice_euclid[80743]: Scheduled osd.default_drive_group update...
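
The three "Added host" lines and the three scheduled updates above are the bootstrap container (nice_euclid) registering the cluster hosts and queueing the saved service specs. A minimal sketch of the equivalent orchestrator commands, assuming an admin keyring is already in place; the exact invocation the bootstrap tooling uses is not shown in this log, and the OSD spec (osd.default_drive_group) is submitted from the mounted spec file rather than a --placement string:

    ceph orch host add compute-0 192.168.122.100    # registers each host with cephadm
    ceph orch host add compute-1 192.168.122.101
    ceph orch host add compute-2 192.168.122.102
    ceph orch apply mon --placement='compute-0;compute-1;compute-2'
    ceph orch apply mgr --placement='compute-0;compute-1;compute-2'
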
Dec 09 12:02:29 compute-0 systemd[1]: libpod-59b79aa04ab636a10b1a224277e84877f815f64dd3735b04185ef8e47c197718.scope: Deactivated successfully.
Dec 09 12:02:29 compute-0 podman[80728]: 2025-12-09 12:02:29.85128707 +0000 UTC m=+11.794496196 container died 59b79aa04ab636a10b1a224277e84877f815f64dd3735b04185ef8e47c197718 (image=quay.io/ceph/ceph:v19, name=nice_euclid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 12:02:29 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 12:02:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-137f9de7936c396b7db3855460763291f0c322f5436f2359c1ce36065f68b22e-merged.mount: Deactivated successfully.
Dec 09 12:02:29 compute-0 podman[80728]: 2025-12-09 12:02:29.952037035 +0000 UTC m=+11.895246151 container remove 59b79aa04ab636a10b1a224277e84877f815f64dd3735b04185ef8e47c197718 (image=quay.io/ceph/ceph:v19, name=nice_euclid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 12:02:29 compute-0 systemd[1]: libpod-conmon-59b79aa04ab636a10b1a224277e84877f815f64dd3735b04185ef8e47c197718.scope: Deactivated successfully.
Dec 09 12:02:29 compute-0 sudo[80725]: pam_unix(sudo:session): session closed for user root
Dec 09 12:02:30 compute-0 sudo[80899]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njglixoegbeujxnwcntjugmlmwzhkzzc ; /usr/bin/python3'
Dec 09 12:02:30 compute-0 sudo[80899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:02:30 compute-0 python3[80901]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 750b57e3-924f-51a5-ab09-01517535f732 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 12:02:30 compute-0 podman[80903]: 2025-12-09 12:02:30.464547879 +0000 UTC m=+0.042632360 container create e643387126d2f3bd6e3debd51f5a7ee2eaa506e8a961381050f193e791cafdad (image=quay.io/ceph/ceph:v19, name=frosty_brahmagupta, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 09 12:02:30 compute-0 systemd[1]: Started libpod-conmon-e643387126d2f3bd6e3debd51f5a7ee2eaa506e8a961381050f193e791cafdad.scope.
Dec 09 12:02:30 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:02:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b72696fc22e98c5c041fedbeded4f721861ec6d8fe820f03c8a6b3f95024b9b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:02:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b72696fc22e98c5c041fedbeded4f721861ec6d8fe820f03c8a6b3f95024b9b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:02:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b72696fc22e98c5c041fedbeded4f721861ec6d8fe820f03c8a6b3f95024b9b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 09 12:02:30 compute-0 podman[80903]: 2025-12-09 12:02:30.444487431 +0000 UTC m=+0.022571932 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:02:30 compute-0 podman[80903]: 2025-12-09 12:02:30.547372706 +0000 UTC m=+0.125457207 container init e643387126d2f3bd6e3debd51f5a7ee2eaa506e8a961381050f193e791cafdad (image=quay.io/ceph/ceph:v19, name=frosty_brahmagupta, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 12:02:30 compute-0 podman[80903]: 2025-12-09 12:02:30.555634687 +0000 UTC m=+0.133719168 container start e643387126d2f3bd6e3debd51f5a7ee2eaa506e8a961381050f193e791cafdad (image=quay.io/ceph/ceph:v19, name=frosty_brahmagupta, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 12:02:30 compute-0 podman[80903]: 2025-12-09 12:02:30.559226025 +0000 UTC m=+0.137310526 container attach e643387126d2f3bd6e3debd51f5a7ee2eaa506e8a961381050f193e791cafdad (image=quay.io/ceph/ceph:v19, name=frosty_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True)
Dec 09 12:02:30 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:30 compute-0 ceph-mon[74388]: Added host compute-2
Dec 09 12:02:30 compute-0 ceph-mon[74388]: Saving service mon spec with placement compute-0;compute-1;compute-2
Dec 09 12:02:30 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:30 compute-0 ceph-mon[74388]: Saving service mgr spec with placement compute-0;compute-1;compute-2
Dec 09 12:02:30 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:30 compute-0 ceph-mon[74388]: Marking host: compute-0 for OSDSpec preview refresh.
Dec 09 12:02:30 compute-0 ceph-mon[74388]: Marking host: compute-1 for OSDSpec preview refresh.
Dec 09 12:02:30 compute-0 ceph-mon[74388]: Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Dec 09 12:02:30 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:30 compute-0 ceph-mon[74388]: pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 12:02:30 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec 09 12:02:30 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4259871925' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 09 12:02:30 compute-0 frosty_brahmagupta[80919]: 
Dec 09 12:02:30 compute-0 frosty_brahmagupta[80919]: {"fsid":"750b57e3-924f-51a5-ab09-01517535f732","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":64,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2025-12-09T12:01:24.354878+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-12-09T12:01:24.356907+0000","services":{}},"progress_events":{}}
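
This JSON is the output of the "ceph ... status --format json" container (frosty_brahmagupta) started by the Ansible task at 12:02:30. Stripped of the podman wrapper, the gate that task implements reduces to roughly the following, with the fsid and paths taken from the command line logged above:

    ceph --fsid 750b57e3-924f-51a5-ab09-01517535f732 \
         -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
         status --format json | jq .osdmap.num_up_osds    # returns 0 at this point

The HEALTH_WARN (TOO_FEW_OSDS) it reports is expected at this stage: the OSD spec has been saved, but no OSDs have been created yet.
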
Dec 09 12:02:30 compute-0 systemd[1]: libpod-e643387126d2f3bd6e3debd51f5a7ee2eaa506e8a961381050f193e791cafdad.scope: Deactivated successfully.
Dec 09 12:02:30 compute-0 podman[80903]: 2025-12-09 12:02:30.992686365 +0000 UTC m=+0.570770856 container died e643387126d2f3bd6e3debd51f5a7ee2eaa506e8a961381050f193e791cafdad (image=quay.io/ceph/ceph:v19, name=frosty_brahmagupta, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 12:02:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b72696fc22e98c5c041fedbeded4f721861ec6d8fe820f03c8a6b3f95024b9b-merged.mount: Deactivated successfully.
Dec 09 12:02:31 compute-0 podman[80903]: 2025-12-09 12:02:31.02880849 +0000 UTC m=+0.606892971 container remove e643387126d2f3bd6e3debd51f5a7ee2eaa506e8a961381050f193e791cafdad (image=quay.io/ceph/ceph:v19, name=frosty_brahmagupta, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 12:02:31 compute-0 systemd[1]: libpod-conmon-e643387126d2f3bd6e3debd51f5a7ee2eaa506e8a961381050f193e791cafdad.scope: Deactivated successfully.
Dec 09 12:02:31 compute-0 sudo[80899]: pam_unix(sudo:session): session closed for user root
Dec 09 12:02:31 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 12:02:31 compute-0 ceph-mon[74388]: from='client.? 192.168.122.100:0/4259871925' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 09 12:02:31 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 12:02:32 compute-0 ceph-mon[74388]: pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 12:02:33 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 12:02:34 compute-0 ceph-mon[74388]: pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 12:02:35 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 12:02:36 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 12:02:36 compute-0 ceph-mon[74388]: pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 12:02:37 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 12:02:38 compute-0 ceph-mon[74388]: pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 12:02:39 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 12:02:40 compute-0 ceph-mon[74388]: pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 12:02:41 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 12:02:41 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 12:02:42 compute-0 ceph-mon[74388]: pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 12:02:43 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 12:02:44 compute-0 ceph-mon[74388]: pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 12:02:45 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 12:02:46 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 12:02:46 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 09 12:02:46 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:46 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 09 12:02:46 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:46 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 09 12:02:46 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:46 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 09 12:02:46 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:46 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Dec 09 12:02:46 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 09 12:02:46 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 12:02:46 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:02:46 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 09 12:02:46 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 09 12:02:46 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Dec 09 12:02:46 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Dec 09 12:02:46 compute-0 ceph-mon[74388]: pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 12:02:46 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:46 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:46 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:46 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:46 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 09 12:02:46 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:02:46 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 09 12:02:47 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.conf
Dec 09 12:02:47 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.conf
Dec 09 12:02:47 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec 09 12:02:47 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec 09 12:02:47 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 12:02:47 compute-0 ceph-mon[74388]: Updating compute-1:/etc/ceph/ceph.conf
Dec 09 12:02:48 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.client.admin.keyring
Dec 09 12:02:48 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.client.admin.keyring
Dec 09 12:02:48 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 09 12:02:48 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:48 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 09 12:02:48 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:48 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 09 12:02:48 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:48 compute-0 ceph-mgr[74679]: [cephadm ERROR cephadm.serve] Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
                                           service_name: mon
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Dec 09 12:02:48 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
                                           service_name: mon
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Dec 09 12:02:48 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 12:02:48 compute-0 ceph-mgr[74679]: [cephadm ERROR cephadm.serve] Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
                                           service_name: mgr
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Dec 09 12:02:48 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
                                           service_name: mgr
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Dec 09 12:02:48 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 12:02:48 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:02:48.806+0000 7ff251665640 -1 log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
Dec 09 12:02:48 compute-0 ceph-mgr[74679]: [progress INFO root] update: starting ev a599e1c0-4404-4619-9cd6-0c959f55e8a0 (Updating crash deployment (+1 -> 2))
Dec 09 12:02:48 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: service_name: mon
Dec 09 12:02:48 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: placement:
Dec 09 12:02:48 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]:   hosts:
Dec 09 12:02:48 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]:   - compute-0
Dec 09 12:02:48 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]:   - compute-1
Dec 09 12:02:48 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]:   - compute-2
Dec 09 12:02:48 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Dec 09 12:02:48 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:02:48.806+0000 7ff251665640 -1 log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
Dec 09 12:02:48 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: service_name: mgr
Dec 09 12:02:48 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: placement:
Dec 09 12:02:48 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]:   hosts:
Dec 09 12:02:48 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]:   - compute-0
Dec 09 12:02:48 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]:   - compute-1
Dec 09 12:02:48 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]:   - compute-2
Dec 09 12:02:48 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Dec 09 12:02:48 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 09 12:02:48 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec 09 12:02:48 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec 09 12:02:48 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 12:02:48 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:02:48 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-1 on compute-1
Dec 09 12:02:48 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-1 on compute-1
Dec 09 12:02:48 compute-0 ceph-mon[74388]: Updating compute-1:/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.conf
Dec 09 12:02:48 compute-0 ceph-mon[74388]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec 09 12:02:48 compute-0 ceph-mon[74388]: pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 12:02:48 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:48 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:48 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:48 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec 09 12:02:48 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec 09 12:02:48 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:02:49 compute-0 ceph-mon[74388]: log_channel(cluster) log [WRN] : Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Dec 09 12:02:49 compute-0 ceph-mgr[74679]: [balancer INFO root] Optimize plan auto_2025-12-09_12:02:49
Dec 09 12:02:49 compute-0 ceph-mgr[74679]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 09 12:02:49 compute-0 ceph-mgr[74679]: [balancer INFO root] do_upmap
Dec 09 12:02:49 compute-0 ceph-mgr[74679]: [balancer INFO root] No pools available
Dec 09 12:02:49 compute-0 ceph-mgr[74679]: [pg_autoscaler INFO root] _maybe_adjust
Dec 09 12:02:49 compute-0 ceph-mgr[74679]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 09 12:02:49 compute-0 ceph-mgr[74679]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 12:02:49 compute-0 ceph-mgr[74679]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 12:02:49 compute-0 ceph-mgr[74679]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 09 12:02:49 compute-0 ceph-mgr[74679]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 12:02:49 compute-0 ceph-mgr[74679]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 12:02:49 compute-0 ceph-mgr[74679]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 12:02:49 compute-0 ceph-mgr[74679]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 12:02:49 compute-0 ceph-mon[74388]: Updating compute-1:/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.client.admin.keyring
Dec 09 12:02:49 compute-0 ceph-mon[74388]: Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
                                           service_name: mon
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Dec 09 12:02:49 compute-0 ceph-mon[74388]: pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 12:02:49 compute-0 ceph-mon[74388]: Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
                                           service_name: mgr
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Dec 09 12:02:49 compute-0 ceph-mon[74388]: pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 12:02:49 compute-0 ceph-mon[74388]: Deploying daemon crash.compute-1 on compute-1
Dec 09 12:02:49 compute-0 ceph-mon[74388]: Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
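
The CEPHADM_APPLY_SPEC_FAIL above appears to be a race: the serve cycle that applied the mon and mgr specs was still working from an inventory view taken before compute-2 finished registering (the cephadm binary was only deployed to that host at 12:02:26), so the placement on compute-2 failed with "Unknown hosts". cephadm re-applies saved specs on every serve cycle, so the condition is normally transient; a sketch of how to confirm and, if it persists, re-submit the specs by hand, using the spec file path from the Ansible task above:

    ceph orch host ls                                          # compute-2 should appear with its addr
    ceph orch apply -i /home/ceph-admin/specs/ceph_spec.yaml   # re-submit the saved mon/mgr/osd specs
    ceph health detail                                         # the warning clears once both specs apply
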
Dec 09 12:02:50 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 12:02:51 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 09 12:02:51 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:51 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 09 12:02:51 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:51 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec 09 12:02:51 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:51 compute-0 ceph-mgr[74679]: [progress INFO root] complete: finished ev a599e1c0-4404-4619-9cd6-0c959f55e8a0 (Updating crash deployment (+1 -> 2))
Dec 09 12:02:51 compute-0 ceph-mgr[74679]: [progress INFO root] Completed event a599e1c0-4404-4619-9cd6-0c959f55e8a0 (Updating crash deployment (+1 -> 2)) in 2 seconds
Dec 09 12:02:51 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec 09 12:02:51 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:51 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 09 12:02:51 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 09 12:02:51 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 09 12:02:51 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 09 12:02:51 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 12:02:51 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:02:51 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 09 12:02:51 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 09 12:02:51 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 12:02:51 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:02:51 compute-0 sudo[80956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 12:02:51 compute-0 sudo[80956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:02:51 compute-0 sudo[80956]: pam_unix(sudo:session): session closed for user root
Dec 09 12:02:51 compute-0 sudo[80981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 750b57e3-924f-51a5-ab09-01517535f732 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 09 12:02:51 compute-0 sudo[80981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:02:51 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 12:02:51 compute-0 podman[81045]: 2025-12-09 12:02:51.679346247 +0000 UTC m=+0.042450124 container create b889c6fcdf0502b6f7e641c7526649cc83277528f2c382f5e85e274c32cbdd4f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec 09 12:02:51 compute-0 systemd[1]: Started libpod-conmon-b889c6fcdf0502b6f7e641c7526649cc83277528f2c382f5e85e274c32cbdd4f.scope.
Dec 09 12:02:51 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:02:51 compute-0 podman[81045]: 2025-12-09 12:02:51.746626564 +0000 UTC m=+0.109730461 container init b889c6fcdf0502b6f7e641c7526649cc83277528f2c382f5e85e274c32cbdd4f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_sinoussi, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 12:02:51 compute-0 podman[81045]: 2025-12-09 12:02:51.751940018 +0000 UTC m=+0.115043895 container start b889c6fcdf0502b6f7e641c7526649cc83277528f2c382f5e85e274c32cbdd4f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 09 12:02:51 compute-0 podman[81045]: 2025-12-09 12:02:51.6590197 +0000 UTC m=+0.022123627 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 09 12:02:51 compute-0 podman[81045]: 2025-12-09 12:02:51.755378461 +0000 UTC m=+0.118482368 container attach b889c6fcdf0502b6f7e641c7526649cc83277528f2c382f5e85e274c32cbdd4f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_sinoussi, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec 09 12:02:51 compute-0 unruffled_sinoussi[81061]: 167 167
Dec 09 12:02:51 compute-0 systemd[1]: libpod-b889c6fcdf0502b6f7e641c7526649cc83277528f2c382f5e85e274c32cbdd4f.scope: Deactivated successfully.
Dec 09 12:02:51 compute-0 podman[81045]: 2025-12-09 12:02:51.75747699 +0000 UTC m=+0.120580897 container died b889c6fcdf0502b6f7e641c7526649cc83277528f2c382f5e85e274c32cbdd4f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_sinoussi, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 09 12:02:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba22252f793ee370dc3b59ec77496cb8dc67de76b3097b30d24fa75e9eae0a25-merged.mount: Deactivated successfully.
Dec 09 12:02:51 compute-0 podman[81045]: 2025-12-09 12:02:51.791571628 +0000 UTC m=+0.154675505 container remove b889c6fcdf0502b6f7e641c7526649cc83277528f2c382f5e85e274c32cbdd4f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_sinoussi, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 09 12:02:51 compute-0 systemd[1]: libpod-conmon-b889c6fcdf0502b6f7e641c7526649cc83277528f2c382f5e85e274c32cbdd4f.scope: Deactivated successfully.
Dec 09 12:02:51 compute-0 podman[81084]: 2025-12-09 12:02:51.935906314 +0000 UTC m=+0.038411942 container create a6b2c3965989410b4195137088774fcbee481e5ef030888d57dc749a8d02e5b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 12:02:51 compute-0 systemd[1]: Started libpod-conmon-a6b2c3965989410b4195137088774fcbee481e5ef030888d57dc749a8d02e5b7.scope.
Dec 09 12:02:51 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:02:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8124631cec4d83bcdaab14eb1bbb378594722c4015b9f43398ebee593471d08f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 12:02:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8124631cec4d83bcdaab14eb1bbb378594722c4015b9f43398ebee593471d08f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:02:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8124631cec4d83bcdaab14eb1bbb378594722c4015b9f43398ebee593471d08f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:02:52 compute-0 ceph-mon[74388]: pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 12:02:52 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:52 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:52 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:52 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:52 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 09 12:02:52 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 09 12:02:52 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:02:52 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 09 12:02:52 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:02:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8124631cec4d83bcdaab14eb1bbb378594722c4015b9f43398ebee593471d08f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 12:02:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8124631cec4d83bcdaab14eb1bbb378594722c4015b9f43398ebee593471d08f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 12:02:52 compute-0 podman[81084]: 2025-12-09 12:02:52.012858458 +0000 UTC m=+0.115364106 container init a6b2c3965989410b4195137088774fcbee481e5ef030888d57dc749a8d02e5b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_euler, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True)
Dec 09 12:02:52 compute-0 podman[81084]: 2025-12-09 12:02:51.919385371 +0000 UTC m=+0.021891029 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 09 12:02:52 compute-0 podman[81084]: 2025-12-09 12:02:52.022548416 +0000 UTC m=+0.125054054 container start a6b2c3965989410b4195137088774fcbee481e5ef030888d57dc749a8d02e5b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_euler, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 12:02:52 compute-0 podman[81084]: 2025-12-09 12:02:52.02723247 +0000 UTC m=+0.129738128 container attach a6b2c3965989410b4195137088774fcbee481e5ef030888d57dc749a8d02e5b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 12:02:52 compute-0 clever_euler[81100]: --> passed data devices: 0 physical, 1 LVM
Dec 09 12:02:52 compute-0 clever_euler[81100]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 09 12:02:52 compute-0 clever_euler[81100]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 09 12:02:52 compute-0 clever_euler[81100]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 0cb4756c-1cb3-414f-a66b-4ca287023452
Dec 09 12:02:52 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 12:02:52 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "90e37dc6-712e-48c9-9312-9b917a38a95d"} v 0)
Dec 09 12:02:52 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/3669521524' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "90e37dc6-712e-48c9-9312-9b917a38a95d"}]: dispatch
Dec 09 12:02:52 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Dec 09 12:02:52 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 09 12:02:52 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "0cb4756c-1cb3-414f-a66b-4ca287023452"} v 0)
Dec 09 12:02:52 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4063845356' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "0cb4756c-1cb3-414f-a66b-4ca287023452"}]: dispatch
Dec 09 12:02:52 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/3669521524' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "90e37dc6-712e-48c9-9312-9b917a38a95d"}]': finished
Dec 09 12:02:52 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Dec 09 12:02:52 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Dec 09 12:02:52 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 09 12:02:52 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Dec 09 12:02:52 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 09 12:02:52 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 09 12:02:52 compute-0 ceph-mgr[74679]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 09 12:02:52 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4063845356' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "0cb4756c-1cb3-414f-a66b-4ca287023452"}]': finished
Dec 09 12:02:52 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Dec 09 12:02:52 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Dec 09 12:02:52 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 09 12:02:52 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 09 12:02:52 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 09 12:02:52 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 09 12:02:52 compute-0 ceph-mgr[74679]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 09 12:02:52 compute-0 ceph-mgr[74679]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
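
The two "failed to return metadata" messages are expected here: osd.0 and osd.1 have just been allocated in the osdmap via "osd new" but their daemons have not booted yet, so the mon has no daemon metadata to hand back to the mgr. Once the OSDs start and register, the same query succeeds; for example:

    ceph osd metadata 1    # ENOENT until osd.1 boots, populated afterwards
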
Dec 09 12:02:53 compute-0 ceph-mon[74388]: from='client.? 192.168.122.101:0/3669521524' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "90e37dc6-712e-48c9-9312-9b917a38a95d"}]: dispatch
Dec 09 12:02:53 compute-0 ceph-mon[74388]: from='client.? 192.168.122.100:0/4063845356' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "0cb4756c-1cb3-414f-a66b-4ca287023452"}]: dispatch
Dec 09 12:02:53 compute-0 ceph-mon[74388]: from='client.? 192.168.122.101:0/3669521524' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "90e37dc6-712e-48c9-9312-9b917a38a95d"}]': finished
Dec 09 12:02:53 compute-0 ceph-mon[74388]: osdmap e4: 1 total, 0 up, 1 in
Dec 09 12:02:53 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 09 12:02:53 compute-0 ceph-mon[74388]: from='client.? 192.168.122.100:0/4063845356' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "0cb4756c-1cb3-414f-a66b-4ca287023452"}]': finished
Dec 09 12:02:53 compute-0 ceph-mon[74388]: osdmap e5: 2 total, 0 up, 2 in
Dec 09 12:02:53 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 09 12:02:53 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 09 12:02:53 compute-0 clever_euler[81100]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Dec 09 12:02:53 compute-0 clever_euler[81100]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Dec 09 12:02:53 compute-0 clever_euler[81100]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 09 12:02:53 compute-0 clever_euler[81100]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Dec 09 12:02:53 compute-0 clever_euler[81100]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Dec 09 12:02:53 compute-0 lvm[81161]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 09 12:02:53 compute-0 lvm[81161]: VG ceph_vg0 finished
Dec 09 12:02:53 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Dec 09 12:02:53 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1053072884' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Dec 09 12:02:53 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Dec 09 12:02:53 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1604053339' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Dec 09 12:02:53 compute-0 clever_euler[81100]:  stderr: got monmap epoch 1
Dec 09 12:02:53 compute-0 clever_euler[81100]: --> Creating keyring file for osd.1
Dec 09 12:02:53 compute-0 clever_euler[81100]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Dec 09 12:02:53 compute-0 clever_euler[81100]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Dec 09 12:02:53 compute-0 clever_euler[81100]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 0cb4756c-1cb3-414f-a66b-4ca287023452 --setuser ceph --setgroup ceph
Dec 09 12:02:54 compute-0 ceph-mon[74388]: pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 12:02:54 compute-0 ceph-mon[74388]: from='client.? 192.168.122.101:0/1053072884' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Dec 09 12:02:54 compute-0 ceph-mon[74388]: from='client.? 192.168.122.100:0/1604053339' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Dec 09 12:02:54 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 12:02:55 compute-0 ceph-mon[74388]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Dec 09 12:02:55 compute-0 ceph-mgr[74679]: [progress INFO root] Writing back 2 completed events
Dec 09 12:02:55 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 09 12:02:55 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:55 compute-0 ceph-mon[74388]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Dec 09 12:02:55 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:02:56 compute-0 ceph-mon[74388]: pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 12:02:56 compute-0 clever_euler[81100]:  stderr: 2025-12-09T12:02:53.699+0000 7fa5876b6740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) No valid bdev label found
Dec 09 12:02:56 compute-0 clever_euler[81100]:  stderr: 2025-12-09T12:02:53.965+0000 7fa5876b6740 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Dec 09 12:02:56 compute-0 clever_euler[81100]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Dec 09 12:02:56 compute-0 clever_euler[81100]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 09 12:02:56 compute-0 clever_euler[81100]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Dec 09 12:02:56 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 12:02:56 compute-0 clever_euler[81100]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Dec 09 12:02:56 compute-0 clever_euler[81100]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Dec 09 12:02:56 compute-0 clever_euler[81100]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 09 12:02:56 compute-0 clever_euler[81100]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 09 12:02:56 compute-0 clever_euler[81100]: --> ceph-volume lvm activate successful for osd ID: 1
Dec 09 12:02:56 compute-0 clever_euler[81100]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Dec 09 12:02:56 compute-0 systemd[1]: libpod-a6b2c3965989410b4195137088774fcbee481e5ef030888d57dc749a8d02e5b7.scope: Deactivated successfully.
Dec 09 12:02:56 compute-0 systemd[1]: libpod-a6b2c3965989410b4195137088774fcbee481e5ef030888d57dc749a8d02e5b7.scope: Consumed 2.113s CPU time.
Dec 09 12:02:56 compute-0 podman[82076]: 2025-12-09 12:02:56.701110062 +0000 UTC m=+0.029101465 container died a6b2c3965989410b4195137088774fcbee481e5ef030888d57dc749a8d02e5b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_euler, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 09 12:02:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-8124631cec4d83bcdaab14eb1bbb378594722c4015b9f43398ebee593471d08f-merged.mount: Deactivated successfully.
Dec 09 12:02:56 compute-0 podman[82076]: 2025-12-09 12:02:56.747119562 +0000 UTC m=+0.075110945 container remove a6b2c3965989410b4195137088774fcbee481e5ef030888d57dc749a8d02e5b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_euler, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 09 12:02:56 compute-0 systemd[1]: libpod-conmon-a6b2c3965989410b4195137088774fcbee481e5ef030888d57dc749a8d02e5b7.scope: Deactivated successfully.
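The clever_euler run above is one complete `ceph-volume lvm create` for osd.1: mount a tmpfs OSD dir, symlink the LV as "block", fetch the monmap, run `ceph-osd --mkfs` (the two bluestore stderr lines about a missing bdev label and an unparsable fsid are normal on a not-yet-formatted device), then prime and chown the directory. A condensed sketch of that order, reconstructed from the commands logged above (paths and the OSD id are taken from this log; several mkfs flags are omitted):

    import subprocess

    OSD_ID, LV = "1", "/dev/ceph_vg0/ceph_lv0"
    OSD_DIR = f"/var/lib/ceph/osd/ceph-{OSD_ID}"

    # Order as logged by ceph-volume: prepare the dir, link the device,
    # fetch the monmap, format with bluestore, then fix ownership.
    steps = [
        ["mount", "-t", "tmpfs", "tmpfs", OSD_DIR],
        ["ln", "-s", LV, f"{OSD_DIR}/block"],
        ["ceph", "--name", "client.bootstrap-osd",
         "--keyring", "/var/lib/ceph/bootstrap-osd/ceph.keyring",
         "mon", "getmap", "-o", f"{OSD_DIR}/activate.monmap"],
        ["ceph-osd", "--cluster", "ceph", "--osd-objectstore", "bluestore",
         "--mkfs", "-i", OSD_ID, "--monmap", f"{OSD_DIR}/activate.monmap",
         "--osd-data", OSD_DIR],
        ["chown", "-R", "ceph:ceph", OSD_DIR],
    ]
    for cmd in steps:
        subprocess.run(cmd, check=True)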
Dec 09 12:02:56 compute-0 sudo[80981]: pam_unix(sudo:session): session closed for user root
Dec 09 12:02:56 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 12:02:56 compute-0 sudo[82091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 12:02:56 compute-0 sudo[82091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:02:56 compute-0 sudo[82091]: pam_unix(sudo:session): session closed for user root
Dec 09 12:02:56 compute-0 sudo[82116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 750b57e3-924f-51a5-ab09-01517535f732 -- lvm list --format json
Dec 09 12:02:56 compute-0 sudo[82116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:02:57 compute-0 ceph-mon[74388]: pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 12:02:57 compute-0 podman[82182]: 2025-12-09 12:02:57.270327436 +0000 UTC m=+0.040461868 container create 61487b6221a1523eaad23b378137337bba248b2f8284823402f79121119dcf57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_jepsen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 09 12:02:57 compute-0 systemd[1]: Started libpod-conmon-61487b6221a1523eaad23b378137337bba248b2f8284823402f79121119dcf57.scope.
Dec 09 12:02:57 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:02:57 compute-0 podman[82182]: 2025-12-09 12:02:57.34025509 +0000 UTC m=+0.110389562 container init 61487b6221a1523eaad23b378137337bba248b2f8284823402f79121119dcf57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_jepsen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 12:02:57 compute-0 podman[82182]: 2025-12-09 12:02:57.347433586 +0000 UTC m=+0.117568018 container start 61487b6221a1523eaad23b378137337bba248b2f8284823402f79121119dcf57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_jepsen, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 12:02:57 compute-0 podman[82182]: 2025-12-09 12:02:57.253613778 +0000 UTC m=+0.023748240 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 09 12:02:57 compute-0 festive_jepsen[82198]: 167 167
Dec 09 12:02:57 compute-0 podman[82182]: 2025-12-09 12:02:57.351201609 +0000 UTC m=+0.121336061 container attach 61487b6221a1523eaad23b378137337bba248b2f8284823402f79121119dcf57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 12:02:57 compute-0 systemd[1]: libpod-61487b6221a1523eaad23b378137337bba248b2f8284823402f79121119dcf57.scope: Deactivated successfully.
Dec 09 12:02:57 compute-0 podman[82182]: 2025-12-09 12:02:57.352526333 +0000 UTC m=+0.122660765 container died 61487b6221a1523eaad23b378137337bba248b2f8284823402f79121119dcf57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 12:02:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-af7af117ba6165c80f16d9ed7df31fbb9c3db1fae1f6b86483b0eefd52604376-merged.mount: Deactivated successfully.
Dec 09 12:02:57 compute-0 podman[82182]: 2025-12-09 12:02:57.382082672 +0000 UTC m=+0.152217104 container remove 61487b6221a1523eaad23b378137337bba248b2f8284823402f79121119dcf57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_jepsen, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 12:02:57 compute-0 systemd[1]: libpod-conmon-61487b6221a1523eaad23b378137337bba248b2f8284823402f79121119dcf57.scope: Deactivated successfully.
Dec 09 12:02:57 compute-0 podman[82221]: 2025-12-09 12:02:57.531909058 +0000 UTC m=+0.037697098 container create 87885d4d865074d983575c8dcf0b4cc123fa58ff75099a4292d8cc0f055cafb4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_poincare, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec 09 12:02:57 compute-0 systemd[1]: Started libpod-conmon-87885d4d865074d983575c8dcf0b4cc123fa58ff75099a4292d8cc0f055cafb4.scope.
Dec 09 12:02:57 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:02:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc19a4b22aca599886108c1d21883d779139191a9695ca1c290d4770728891c5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 12:02:57 compute-0 podman[82221]: 2025-12-09 12:02:57.51733516 +0000 UTC m=+0.023123220 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 09 12:02:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc19a4b22aca599886108c1d21883d779139191a9695ca1c290d4770728891c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:02:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc19a4b22aca599886108c1d21883d779139191a9695ca1c290d4770728891c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:02:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc19a4b22aca599886108c1d21883d779139191a9695ca1c290d4770728891c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 12:02:57 compute-0 podman[82221]: 2025-12-09 12:02:57.620710091 +0000 UTC m=+0.126498151 container init 87885d4d865074d983575c8dcf0b4cc123fa58ff75099a4292d8cc0f055cafb4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_poincare, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec 09 12:02:57 compute-0 podman[82221]: 2025-12-09 12:02:57.628937291 +0000 UTC m=+0.134725331 container start 87885d4d865074d983575c8dcf0b4cc123fa58ff75099a4292d8cc0f055cafb4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_poincare, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Dec 09 12:02:57 compute-0 podman[82221]: 2025-12-09 12:02:57.631983481 +0000 UTC m=+0.137771531 container attach 87885d4d865074d983575c8dcf0b4cc123fa58ff75099a4292d8cc0f055cafb4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_poincare, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec 09 12:02:57 compute-0 friendly_poincare[82237]: {
Dec 09 12:02:57 compute-0 friendly_poincare[82237]:     "1": [
Dec 09 12:02:57 compute-0 friendly_poincare[82237]:         {
Dec 09 12:02:57 compute-0 friendly_poincare[82237]:             "devices": [
Dec 09 12:02:57 compute-0 friendly_poincare[82237]:                 "/dev/loop3"
Dec 09 12:02:57 compute-0 friendly_poincare[82237]:             ],
Dec 09 12:02:57 compute-0 friendly_poincare[82237]:             "lv_name": "ceph_lv0",
Dec 09 12:02:57 compute-0 friendly_poincare[82237]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 12:02:57 compute-0 friendly_poincare[82237]:             "lv_size": "21470642176",
Dec 09 12:02:57 compute-0 friendly_poincare[82237]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=NmXN7G-RzdJ-ddgq-wQWO-4Bzg-8Ecu-xD2Ou5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=750b57e3-924f-51a5-ab09-01517535f732,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=0cb4756c-1cb3-414f-a66b-4ca287023452,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 12:02:57 compute-0 friendly_poincare[82237]:             "lv_uuid": "NmXN7G-RzdJ-ddgq-wQWO-4Bzg-8Ecu-xD2Ou5",
Dec 09 12:02:57 compute-0 friendly_poincare[82237]:             "name": "ceph_lv0",
Dec 09 12:02:57 compute-0 friendly_poincare[82237]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 12:02:57 compute-0 friendly_poincare[82237]:             "tags": {
Dec 09 12:02:57 compute-0 friendly_poincare[82237]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 09 12:02:57 compute-0 friendly_poincare[82237]:                 "ceph.block_uuid": "NmXN7G-RzdJ-ddgq-wQWO-4Bzg-8Ecu-xD2Ou5",
Dec 09 12:02:57 compute-0 friendly_poincare[82237]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 12:02:57 compute-0 friendly_poincare[82237]:                 "ceph.cluster_fsid": "750b57e3-924f-51a5-ab09-01517535f732",
Dec 09 12:02:57 compute-0 friendly_poincare[82237]:                 "ceph.cluster_name": "ceph",
Dec 09 12:02:57 compute-0 friendly_poincare[82237]:                 "ceph.crush_device_class": "",
Dec 09 12:02:57 compute-0 friendly_poincare[82237]:                 "ceph.encrypted": "0",
Dec 09 12:02:57 compute-0 friendly_poincare[82237]:                 "ceph.osd_fsid": "0cb4756c-1cb3-414f-a66b-4ca287023452",
Dec 09 12:02:57 compute-0 friendly_poincare[82237]:                 "ceph.osd_id": "1",
Dec 09 12:02:57 compute-0 friendly_poincare[82237]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 12:02:57 compute-0 friendly_poincare[82237]:                 "ceph.type": "block",
Dec 09 12:02:57 compute-0 friendly_poincare[82237]:                 "ceph.vdo": "0",
Dec 09 12:02:57 compute-0 friendly_poincare[82237]:                 "ceph.with_tpm": "0"
Dec 09 12:02:57 compute-0 friendly_poincare[82237]:             },
Dec 09 12:02:57 compute-0 friendly_poincare[82237]:             "type": "block",
Dec 09 12:02:57 compute-0 friendly_poincare[82237]:             "vg_name": "ceph_vg0"
Dec 09 12:02:57 compute-0 friendly_poincare[82237]:         }
Dec 09 12:02:57 compute-0 friendly_poincare[82237]:     ]
Dec 09 12:02:57 compute-0 friendly_poincare[82237]: }
Dec 09 12:02:57 compute-0 systemd[1]: libpod-87885d4d865074d983575c8dcf0b4cc123fa58ff75099a4292d8cc0f055cafb4.scope: Deactivated successfully.
Dec 09 12:02:57 compute-0 podman[82221]: 2025-12-09 12:02:57.926427271 +0000 UTC m=+0.432215341 container died 87885d4d865074d983575c8dcf0b4cc123fa58ff75099a4292d8cc0f055cafb4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 09 12:02:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc19a4b22aca599886108c1d21883d779139191a9695ca1c290d4770728891c5-merged.mount: Deactivated successfully.
Dec 09 12:02:57 compute-0 podman[82221]: 2025-12-09 12:02:57.964614603 +0000 UTC m=+0.470402643 container remove 87885d4d865074d983575c8dcf0b4cc123fa58ff75099a4292d8cc0f055cafb4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_poincare, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec 09 12:02:57 compute-0 systemd[1]: libpod-conmon-87885d4d865074d983575c8dcf0b4cc123fa58ff75099a4292d8cc0f055cafb4.scope: Deactivated successfully.
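friendly_poincare is the `ceph-volume lvm list --format json` invocation from the sudo line above; its output maps each OSD id to the LVs backing it, with the binding recorded in lv_tags (ceph.osd_id, ceph.osd_fsid, ceph.cluster_fsid, ...). A small parser for exactly that shape:

    import json

    def osds_from_lvm_list(text):
        """Map osd_id -> (lv_path, osd_fsid) from `ceph-volume lvm list --format json`."""
        result = {}
        for osd_id, lvs in json.loads(text).items():
            for lv in lvs:
                if lv.get("type") == "block":
                    result[osd_id] = (lv["lv_path"], lv["tags"]["ceph.osd_fsid"])
        return result

    # With the output printed above, this yields:
    # {'1': ('/dev/ceph_vg0/ceph_lv0', '0cb4756c-1cb3-414f-a66b-4ca287023452')}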
Dec 09 12:02:57 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Dec 09 12:02:57 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Dec 09 12:02:57 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 12:02:57 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:02:57 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-1
Dec 09 12:02:57 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-1
Dec 09 12:02:58 compute-0 sudo[82116]: pam_unix(sudo:session): session closed for user root
Dec 09 12:02:58 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Dec 09 12:02:58 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Dec 09 12:02:58 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 12:02:58 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:02:58 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Dec 09 12:02:58 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Dec 09 12:02:58 compute-0 sudo[82258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 12:02:58 compute-0 sudo[82258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:02:58 compute-0 sudo[82258]: pam_unix(sudo:session): session closed for user root
Dec 09 12:02:58 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Dec 09 12:02:58 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:02:58 compute-0 ceph-mon[74388]: Deploying daemon osd.0 on compute-1
Dec 09 12:02:58 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Dec 09 12:02:58 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:02:58 compute-0 ceph-mon[74388]: Deploying daemon osd.1 on compute-0
Dec 09 12:02:58 compute-0 sudo[82283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 750b57e3-924f-51a5-ab09-01517535f732
Dec 09 12:02:58 compute-0 sudo[82283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
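The deploy step reuses the copy of cephadm that the mgr shipped to /var/lib/ceph/<fsid>/cephadm.<digest>; the long suffix appears to be a checksum of the file's contents, which lets the orchestrator tell whether the remote copy is current. A verification sketch, assuming (not confirmed by this log) that the suffix is the file's SHA-256:

    import hashlib, pathlib

    def digest_matches(path):
        """True if a cephadm.<digest> file's name matches its SHA-256 (assumed convention)."""
        p = pathlib.Path(path)
        want = p.name.split(".", 1)[1]  # text after "cephadm."
        got = hashlib.sha256(p.read_bytes()).hexdigest()
        return got == want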
Dec 09 12:02:58 compute-0 podman[82350]: 2025-12-09 12:02:58.479317778 +0000 UTC m=+0.039141545 container create a1ac1b0074a547aeb098c912475519df0871d4abd569bae7253e25504eb9b42f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_curran, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 09 12:02:58 compute-0 systemd[1]: Started libpod-conmon-a1ac1b0074a547aeb098c912475519df0871d4abd569bae7253e25504eb9b42f.scope.
Dec 09 12:02:58 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:02:58 compute-0 podman[82350]: 2025-12-09 12:02:58.555740796 +0000 UTC m=+0.115564573 container init a1ac1b0074a547aeb098c912475519df0871d4abd569bae7253e25504eb9b42f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 12:02:58 compute-0 podman[82350]: 2025-12-09 12:02:58.461183454 +0000 UTC m=+0.021007241 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 09 12:02:58 compute-0 podman[82350]: 2025-12-09 12:02:58.563080216 +0000 UTC m=+0.122903973 container start a1ac1b0074a547aeb098c912475519df0871d4abd569bae7253e25504eb9b42f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_curran, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 12:02:58 compute-0 podman[82350]: 2025-12-09 12:02:58.566526379 +0000 UTC m=+0.126350196 container attach a1ac1b0074a547aeb098c912475519df0871d4abd569bae7253e25504eb9b42f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_curran, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 09 12:02:58 compute-0 compassionate_curran[82366]: 167 167
Dec 09 12:02:58 compute-0 systemd[1]: libpod-a1ac1b0074a547aeb098c912475519df0871d4abd569bae7253e25504eb9b42f.scope: Deactivated successfully.
Dec 09 12:02:58 compute-0 podman[82350]: 2025-12-09 12:02:58.569950552 +0000 UTC m=+0.129774329 container died a1ac1b0074a547aeb098c912475519df0871d4abd569bae7253e25504eb9b42f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec 09 12:02:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b189e9e3f2292ebb39f49392eb302b635d33d9ca84ea65545555bb8452afb90-merged.mount: Deactivated successfully.
Dec 09 12:02:58 compute-0 podman[82350]: 2025-12-09 12:02:58.6153009 +0000 UTC m=+0.175124667 container remove a1ac1b0074a547aeb098c912475519df0871d4abd569bae7253e25504eb9b42f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_curran, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 09 12:02:58 compute-0 systemd[1]: libpod-conmon-a1ac1b0074a547aeb098c912475519df0871d4abd569bae7253e25504eb9b42f.scope: Deactivated successfully.
Dec 09 12:02:58 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 12:02:58 compute-0 podman[82395]: 2025-12-09 12:02:58.882094542 +0000 UTC m=+0.048596935 container create c6d09010acb1fdbc7701f4e36ef21fd778e6808d092640eae009aa9c436bb1ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-750b57e3-924f-51a5-ab09-01517535f732-osd-1-activate-test, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 12:02:58 compute-0 systemd[1]: Started libpod-conmon-c6d09010acb1fdbc7701f4e36ef21fd778e6808d092640eae009aa9c436bb1ae.scope.
Dec 09 12:02:58 compute-0 podman[82395]: 2025-12-09 12:02:58.862184108 +0000 UTC m=+0.028686491 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 09 12:02:58 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:02:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b72f6f1f23451efa131a7f68d94cc5df79eeaa18e13e8e47b605e240522e044d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 12:02:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b72f6f1f23451efa131a7f68d94cc5df79eeaa18e13e8e47b605e240522e044d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:02:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b72f6f1f23451efa131a7f68d94cc5df79eeaa18e13e8e47b605e240522e044d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:02:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b72f6f1f23451efa131a7f68d94cc5df79eeaa18e13e8e47b605e240522e044d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 12:02:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b72f6f1f23451efa131a7f68d94cc5df79eeaa18e13e8e47b605e240522e044d/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec 09 12:02:58 compute-0 podman[82395]: 2025-12-09 12:02:58.980566773 +0000 UTC m=+0.147069166 container init c6d09010acb1fdbc7701f4e36ef21fd778e6808d092640eae009aa9c436bb1ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-750b57e3-924f-51a5-ab09-01517535f732-osd-1-activate-test, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 12:02:58 compute-0 podman[82395]: 2025-12-09 12:02:58.993666152 +0000 UTC m=+0.160168525 container start c6d09010acb1fdbc7701f4e36ef21fd778e6808d092640eae009aa9c436bb1ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-750b57e3-924f-51a5-ab09-01517535f732-osd-1-activate-test, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec 09 12:02:58 compute-0 podman[82395]: 2025-12-09 12:02:58.997631912 +0000 UTC m=+0.164134275 container attach c6d09010acb1fdbc7701f4e36ef21fd778e6808d092640eae009aa9c436bb1ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-750b57e3-924f-51a5-ab09-01517535f732-osd-1-activate-test, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 12:02:59 compute-0 ceph-mon[74388]: pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 12:02:59 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-osd-1-activate-test[82412]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Dec 09 12:02:59 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-osd-1-activate-test[82412]:                             [--no-systemd] [--no-tmpfs]
Dec 09 12:02:59 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-osd-1-activate-test[82412]: ceph-volume activate: error: unrecognized arguments: --bad-option
Dec 09 12:02:59 compute-0 systemd[1]: libpod-c6d09010acb1fdbc7701f4e36ef21fd778e6808d092640eae009aa9c436bb1ae.scope: Deactivated successfully.
Dec 09 12:02:59 compute-0 podman[82417]: 2025-12-09 12:02:59.228831337 +0000 UTC m=+0.024573607 container died c6d09010acb1fdbc7701f4e36ef21fd778e6808d092640eae009aa9c436bb1ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-750b57e3-924f-51a5-ab09-01517535f732-osd-1-activate-test, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 09 12:02:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-b72f6f1f23451efa131a7f68d94cc5df79eeaa18e13e8e47b605e240522e044d-merged.mount: Deactivated successfully.
Dec 09 12:02:59 compute-0 podman[82417]: 2025-12-09 12:02:59.267846648 +0000 UTC m=+0.063588898 container remove c6d09010acb1fdbc7701f4e36ef21fd778e6808d092640eae009aa9c436bb1ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-750b57e3-924f-51a5-ab09-01517535f732-osd-1-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 12:02:59 compute-0 systemd[1]: libpod-conmon-c6d09010acb1fdbc7701f4e36ef21fd778e6808d092640eae009aa9c436bb1ae.scope: Deactivated successfully.
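The "-osd-1-activate-test" container exiting with "ceph-volume activate: error: unrecognized arguments: --bad-option" is not a real failure: passing a bogus flag on purpose makes ceph-volume print its usage text, and the caller can grep that output to learn which options (such as --no-tmpfs, visible in the usage lines above) this ceph-volume supports. This is consistent with cephadm probing flag support before writing the OSD unit. A generic sketch of the probe pattern (the helper name is mine, not cephadm's):

    import subprocess

    def cli_supports_flag(argv, flag):
        """Probe a CLI: trigger its usage/error output and search it for a flag."""
        proc = subprocess.run(argv + ["--bad-option"],
                              capture_output=True, text=True)
        return flag in (proc.stderr + proc.stdout)

    # e.g. cli_supports_flag(["ceph-volume", "activate"], "--no-tmpfs")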
Dec 09 12:02:59 compute-0 systemd[1]: Reloading.
Dec 09 12:02:59 compute-0 systemd-sysv-generator[82478]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 12:02:59 compute-0 systemd-rc-local-generator[82475]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 12:02:59 compute-0 systemd[1]: Reloading.
Dec 09 12:02:59 compute-0 systemd-rc-local-generator[82517]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 12:02:59 compute-0 systemd-sysv-generator[82521]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 12:03:00 compute-0 systemd[1]: Starting Ceph osd.1 for 750b57e3-924f-51a5-ab09-01517535f732...
Dec 09 12:03:00 compute-0 podman[82575]: 2025-12-09 12:03:00.243893568 +0000 UTC m=+0.045398241 container create 74fa1defde7e6ba438520786a4101cd97f281c4d3a49b7c758db78861a36e243 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-750b57e3-924f-51a5-ab09-01517535f732-osd-1-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 09 12:03:00 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:03:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/473211f932fd87a4dc458ae6d4938c6080a63963645f9d53b979d0dd712dba20/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 12:03:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/473211f932fd87a4dc458ae6d4938c6080a63963645f9d53b979d0dd712dba20/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:03:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/473211f932fd87a4dc458ae6d4938c6080a63963645f9d53b979d0dd712dba20/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:03:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/473211f932fd87a4dc458ae6d4938c6080a63963645f9d53b979d0dd712dba20/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 12:03:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/473211f932fd87a4dc458ae6d4938c6080a63963645f9d53b979d0dd712dba20/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec 09 12:03:00 compute-0 podman[82575]: 2025-12-09 12:03:00.321051139 +0000 UTC m=+0.122555832 container init 74fa1defde7e6ba438520786a4101cd97f281c4d3a49b7c758db78861a36e243 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-750b57e3-924f-51a5-ab09-01517535f732-osd-1-activate, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 12:03:00 compute-0 podman[82575]: 2025-12-09 12:03:00.225801214 +0000 UTC m=+0.027305907 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 09 12:03:00 compute-0 podman[82575]: 2025-12-09 12:03:00.328834605 +0000 UTC m=+0.130339278 container start 74fa1defde7e6ba438520786a4101cd97f281c4d3a49b7c758db78861a36e243 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-750b57e3-924f-51a5-ab09-01517535f732-osd-1-activate, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec 09 12:03:00 compute-0 podman[82575]: 2025-12-09 12:03:00.332895227 +0000 UTC m=+0.134399900 container attach 74fa1defde7e6ba438520786a4101cd97f281c4d3a49b7c758db78861a36e243 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-750b57e3-924f-51a5-ab09-01517535f732-osd-1-activate, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True)
Dec 09 12:03:00 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-osd-1-activate[82591]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 09 12:03:00 compute-0 bash[82575]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 09 12:03:00 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-osd-1-activate[82591]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 09 12:03:00 compute-0 bash[82575]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 09 12:03:00 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 12:03:00 compute-0 lvm[82672]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 09 12:03:00 compute-0 lvm[82672]: VG ceph_vg0 finished
Dec 09 12:03:01 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-osd-1-activate[82591]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec 09 12:03:01 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-osd-1-activate[82591]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 09 12:03:01 compute-0 bash[82575]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec 09 12:03:01 compute-0 bash[82575]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 09 12:03:01 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-osd-1-activate[82591]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 09 12:03:01 compute-0 bash[82575]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 09 12:03:01 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-osd-1-activate[82591]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 09 12:03:01 compute-0 bash[82575]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 09 12:03:01 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-osd-1-activate[82591]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Dec 09 12:03:01 compute-0 bash[82575]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Dec 09 12:03:01 compute-0 sudo[82799]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwydgtgathqqibwrtmrzdongicwygnsi ; /usr/bin/python3'
Dec 09 12:03:01 compute-0 sudo[82799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:03:01 compute-0 python3[82801]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 750b57e3-924f-51a5-ab09-01517535f732 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
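The Ansible task above gates the job on `ceph status --format json | jq .osdmap.num_up_osds`, i.e. the count of booted OSDs. The same check without jq, sketched in Python (the containerized ceph invocation from the log is simplified to a bare CLI call):

    import json, subprocess

    status = json.loads(subprocess.check_output(
        ["ceph", "status", "--format", "json"]))
    num_up = status["osdmap"]["num_up_osds"]  # the field the jq filter selects
    print(num_up)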
Dec 09 12:03:01 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 12:03:01 compute-0 podman[82806]: 2025-12-09 12:03:01.392054585 +0000 UTC m=+0.046904760 container create c4677e7b4f6967a759c3dd193a3f1e12660f91b350ba204b6a6378529f2b4373 (image=quay.io/ceph/ceph:v19, name=hardcore_noether, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 09 12:03:01 compute-0 systemd[1]: Started libpod-conmon-c4677e7b4f6967a759c3dd193a3f1e12660f91b350ba204b6a6378529f2b4373.scope.
Dec 09 12:03:01 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-osd-1-activate[82591]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Dec 09 12:03:01 compute-0 bash[82575]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Dec 09 12:03:01 compute-0 podman[82806]: 2025-12-09 12:03:01.371893153 +0000 UTC m=+0.026743358 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:03:01 compute-0 bash[82575]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Dec 09 12:03:01 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-osd-1-activate[82591]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Dec 09 12:03:01 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-osd-1-activate[82591]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 09 12:03:01 compute-0 bash[82575]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 09 12:03:01 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-osd-1-activate[82591]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 09 12:03:01 compute-0 bash[82575]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 09 12:03:01 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:03:01 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-osd-1-activate[82591]: --> ceph-volume lvm activate successful for osd ID: 1
Dec 09 12:03:01 compute-0 bash[82575]: --> ceph-volume lvm activate successful for osd ID: 1
Dec 09 12:03:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/736c6a5dea499770406b97d1bdc7d4553dbc56c91a17d7ba279cd88c74baafdf/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:03:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/736c6a5dea499770406b97d1bdc7d4553dbc56c91a17d7ba279cd88c74baafdf/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:03:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/736c6a5dea499770406b97d1bdc7d4553dbc56c91a17d7ba279cd88c74baafdf/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 09 12:03:01 compute-0 systemd[1]: libpod-74fa1defde7e6ba438520786a4101cd97f281c4d3a49b7c758db78861a36e243.scope: Deactivated successfully.
Dec 09 12:03:01 compute-0 systemd[1]: libpod-74fa1defde7e6ba438520786a4101cd97f281c4d3a49b7c758db78861a36e243.scope: Consumed 1.315s CPU time.
Dec 09 12:03:01 compute-0 podman[82806]: 2025-12-09 12:03:01.771629007 +0000 UTC m=+0.426479202 container init c4677e7b4f6967a759c3dd193a3f1e12660f91b350ba204b6a6378529f2b4373 (image=quay.io/ceph/ceph:v19, name=hardcore_noether, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid)
Dec 09 12:03:01 compute-0 podman[82806]: 2025-12-09 12:03:01.779318339 +0000 UTC m=+0.434168534 container start c4677e7b4f6967a759c3dd193a3f1e12660f91b350ba204b6a6378529f2b4373 (image=quay.io/ceph/ceph:v19, name=hardcore_noether, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 12:03:01 compute-0 podman[82806]: 2025-12-09 12:03:01.784638614 +0000 UTC m=+0.439488789 container attach c4677e7b4f6967a759c3dd193a3f1e12660f91b350ba204b6a6378529f2b4373 (image=quay.io/ceph/ceph:v19, name=hardcore_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Dec 09 12:03:01 compute-0 podman[82575]: 2025-12-09 12:03:01.841453307 +0000 UTC m=+1.642957990 container died 74fa1defde7e6ba438520786a4101cd97f281c4d3a49b7c758db78861a36e243 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-750b57e3-924f-51a5-ab09-01517535f732-osd-1-activate, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0)
Dec 09 12:03:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-473211f932fd87a4dc458ae6d4938c6080a63963645f9d53b979d0dd712dba20-merged.mount: Deactivated successfully.
Dec 09 12:03:01 compute-0 ceph-mon[74388]: pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 12:03:01 compute-0 podman[82829]: 2025-12-09 12:03:01.886561107 +0000 UTC m=+0.357887491 container remove 74fa1defde7e6ba438520786a4101cd97f281c4d3a49b7c758db78861a36e243 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-750b57e3-924f-51a5-ab09-01517535f732-osd-1-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 09 12:03:01 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 09 12:03:01 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:01 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 09 12:03:01 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:02 compute-0 podman[82903]: 2025-12-09 12:03:02.069256931 +0000 UTC m=+0.041340647 container create 9be3c7a3513a58cb57a967d61fe852e6977c03d68395aa00adcc4d8a0c357943 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-750b57e3-924f-51a5-ab09-01517535f732-osd-1, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 09 12:03:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81fc09ff4820a6deb66f5ff06ea3451f165183c19fdb7e7a0b531acf32edbac6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 12:03:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81fc09ff4820a6deb66f5ff06ea3451f165183c19fdb7e7a0b531acf32edbac6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:03:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81fc09ff4820a6deb66f5ff06ea3451f165183c19fdb7e7a0b531acf32edbac6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:03:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81fc09ff4820a6deb66f5ff06ea3451f165183c19fdb7e7a0b531acf32edbac6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 12:03:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81fc09ff4820a6deb66f5ff06ea3451f165183c19fdb7e7a0b531acf32edbac6/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec 09 12:03:02 compute-0 podman[82903]: 2025-12-09 12:03:02.134469821 +0000 UTC m=+0.106553537 container init 9be3c7a3513a58cb57a967d61fe852e6977c03d68395aa00adcc4d8a0c357943 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-750b57e3-924f-51a5-ab09-01517535f732-osd-1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 12:03:02 compute-0 podman[82903]: 2025-12-09 12:03:02.139469385 +0000 UTC m=+0.111553101 container start 9be3c7a3513a58cb57a967d61fe852e6977c03d68395aa00adcc4d8a0c357943 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-750b57e3-924f-51a5-ab09-01517535f732-osd-1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec 09 12:03:02 compute-0 bash[82903]: 9be3c7a3513a58cb57a967d61fe852e6977c03d68395aa00adcc4d8a0c357943
Dec 09 12:03:02 compute-0 podman[82903]: 2025-12-09 12:03:02.051335393 +0000 UTC m=+0.023419139 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 09 12:03:02 compute-0 systemd[1]: Started Ceph osd.1 for 750b57e3-924f-51a5-ab09-01517535f732.
Dec 09 12:03:02 compute-0 ceph-osd[82922]: set uid:gid to 167:167 (ceph:ceph)
Dec 09 12:03:02 compute-0 ceph-osd[82922]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-osd, pid 2
Dec 09 12:03:02 compute-0 ceph-osd[82922]: pidfile_write: ignore empty --pid-file
Dec 09 12:03:02 compute-0 ceph-osd[82922]: bdev(0x557336059800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 09 12:03:02 compute-0 ceph-osd[82922]: bdev(0x557336059800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 09 12:03:02 compute-0 ceph-osd[82922]: bdev(0x557336059800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 09 12:03:02 compute-0 ceph-osd[82922]: bdev(0x557336059800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 09 12:03:02 compute-0 ceph-osd[82922]: bdev(0x557336059800 /var/lib/ceph/osd/ceph-1/block) close
Dec 09 12:03:02 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec 09 12:03:02 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3464809463' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 09 12:03:02 compute-0 hardcore_noether[82821]: 
Dec 09 12:03:02 compute-0 hardcore_noether[82821]: {"fsid":"750b57e3-924f-51a5-ab09-01517535f732","health":{"status":"HEALTH_WARN","checks":{"CEPHADM_APPLY_SPEC_FAIL":{"severity":"HEALTH_WARN","summary":{"message":"Failed to apply 2 service(s): mon,mgr","count":2},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":95,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":5,"num_osds":2,"num_up_osds":0,"osd_up_since":0,"num_in_osds":2,"osd_in_since":1765281772,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2025-12-09T12:01:24:354878+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-12-09T12:02:52.809145+0000","services":{}},"progress_events":{}}
Dec 09 12:03:02 compute-0 sudo[82283]: pam_unix(sudo:session): session closed for user root
Dec 09 12:03:02 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 12:03:02 compute-0 systemd[1]: libpod-c4677e7b4f6967a759c3dd193a3f1e12660f91b350ba204b6a6378529f2b4373.scope: Deactivated successfully.
Dec 09 12:03:02 compute-0 podman[82806]: 2025-12-09 12:03:02.246334401 +0000 UTC m=+0.901184576 container died c4677e7b4f6967a759c3dd193a3f1e12660f91b350ba204b6a6378529f2b4373 (image=quay.io/ceph/ceph:v19, name=hardcore_noether, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 12:03:02 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:02 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 12:03:02 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-736c6a5dea499770406b97d1bdc7d4553dbc56c91a17d7ba279cd88c74baafdf-merged.mount: Deactivated successfully.
Dec 09 12:03:02 compute-0 podman[82806]: 2025-12-09 12:03:02.296930791 +0000 UTC m=+0.951780966 container remove c4677e7b4f6967a759c3dd193a3f1e12660f91b350ba204b6a6378529f2b4373 (image=quay.io/ceph/ceph:v19, name=hardcore_noether, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 12:03:02 compute-0 systemd[1]: libpod-conmon-c4677e7b4f6967a759c3dd193a3f1e12660f91b350ba204b6a6378529f2b4373.scope: Deactivated successfully.
Dec 09 12:03:02 compute-0 sudo[82799]: pam_unix(sudo:session): session closed for user root
Dec 09 12:03:02 compute-0 sudo[82940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 12:03:02 compute-0 sudo[82940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:03:02 compute-0 sudo[82940]: pam_unix(sudo:session): session closed for user root
Dec 09 12:03:02 compute-0 sudo[82973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 750b57e3-924f-51a5-ab09-01517535f732 -- raw list --format json
Dec 09 12:03:02 compute-0 sudo[82973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:03:02 compute-0 ceph-osd[82922]: bdev(0x557336059800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 09 12:03:02 compute-0 ceph-osd[82922]: bdev(0x557336059800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 09 12:03:02 compute-0 ceph-osd[82922]: bdev(0x557336059800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 09 12:03:02 compute-0 ceph-osd[82922]: bdev(0x557336059800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 09 12:03:02 compute-0 ceph-osd[82922]: bdev(0x557336059800 /var/lib/ceph/osd/ceph-1/block) close
Dec 09 12:03:02 compute-0 ceph-osd[82922]: bdev(0x557336059800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 09 12:03:02 compute-0 ceph-osd[82922]: bdev(0x557336059800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 09 12:03:02 compute-0 ceph-osd[82922]: bdev(0x557336059800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 09 12:03:02 compute-0 ceph-osd[82922]: bdev(0x557336059800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 09 12:03:02 compute-0 ceph-osd[82922]: bdev(0x557336059800 /var/lib/ceph/osd/ceph-1/block) close
Dec 09 12:03:02 compute-0 ceph-osd[82922]: bdev(0x557336059800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 09 12:03:02 compute-0 ceph-osd[82922]: bdev(0x557336059800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 09 12:03:02 compute-0 ceph-osd[82922]: bdev(0x557336059800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 09 12:03:02 compute-0 ceph-osd[82922]: bdev(0x557336059800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 09 12:03:02 compute-0 ceph-osd[82922]: bdev(0x557336059800 /var/lib/ceph/osd/ceph-1/block) close
Dec 09 12:03:02 compute-0 ceph-osd[82922]: bdev(0x557336059800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 09 12:03:02 compute-0 ceph-osd[82922]: bdev(0x557336059800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 09 12:03:02 compute-0 ceph-osd[82922]: bdev(0x557336059800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 09 12:03:02 compute-0 ceph-osd[82922]: bdev(0x557336059800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 09 12:03:02 compute-0 ceph-osd[82922]: bdev(0x557336059800 /var/lib/ceph/osd/ceph-1/block) close
Dec 09 12:03:02 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 12:03:02 compute-0 podman[83045]: 2025-12-09 12:03:02.817615152 +0000 UTC m=+0.044914874 container create a26869bdea0f9b3091b74de9182788e01a9fd9e59a4312ba3235e3dca165737f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec 09 12:03:02 compute-0 systemd[1]: Started libpod-conmon-a26869bdea0f9b3091b74de9182788e01a9fd9e59a4312ba3235e3dca165737f.scope.
Dec 09 12:03:02 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:03:02 compute-0 podman[83045]: 2025-12-09 12:03:02.797462771 +0000 UTC m=+0.024762513 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 09 12:03:02 compute-0 podman[83045]: 2025-12-09 12:03:02.910437577 +0000 UTC m=+0.137737299 container init a26869bdea0f9b3091b74de9182788e01a9fd9e59a4312ba3235e3dca165737f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_napier, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 12:03:02 compute-0 podman[83045]: 2025-12-09 12:03:02.918660736 +0000 UTC m=+0.145960458 container start a26869bdea0f9b3091b74de9182788e01a9fd9e59a4312ba3235e3dca165737f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_napier, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 09 12:03:02 compute-0 podman[83045]: 2025-12-09 12:03:02.921939044 +0000 UTC m=+0.149238786 container attach a26869bdea0f9b3091b74de9182788e01a9fd9e59a4312ba3235e3dca165737f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_napier, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 09 12:03:02 compute-0 hungry_napier[83063]: 167 167
Dec 09 12:03:02 compute-0 systemd[1]: libpod-a26869bdea0f9b3091b74de9182788e01a9fd9e59a4312ba3235e3dca165737f.scope: Deactivated successfully.
Dec 09 12:03:02 compute-0 conmon[83063]: conmon a26869bdea0f9b3091b7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a26869bdea0f9b3091b74de9182788e01a9fd9e59a4312ba3235e3dca165737f.scope/container/memory.events
Dec 09 12:03:02 compute-0 podman[83045]: 2025-12-09 12:03:02.928033164 +0000 UTC m=+0.155332886 container died a26869bdea0f9b3091b74de9182788e01a9fd9e59a4312ba3235e3dca165737f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_napier, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Dec 09 12:03:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-96f9b059d0a0cd96a18dcaea429ae940d11d7b0547340d0254ee6ec657a8aedb-merged.mount: Deactivated successfully.
Dec 09 12:03:02 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:02 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:02 compute-0 ceph-mon[74388]: from='client.? 192.168.122.100:0/3464809463' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 09 12:03:02 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:02 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:02 compute-0 podman[83045]: 2025-12-09 12:03:02.968285635 +0000 UTC m=+0.195585357 container remove a26869bdea0f9b3091b74de9182788e01a9fd9e59a4312ba3235e3dca165737f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_napier, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Dec 09 12:03:02 compute-0 systemd[1]: libpod-conmon-a26869bdea0f9b3091b74de9182788e01a9fd9e59a4312ba3235e3dca165737f.scope: Deactivated successfully.
Dec 09 12:03:03 compute-0 ceph-osd[82922]: bdev(0x557336059800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 09 12:03:03 compute-0 ceph-osd[82922]: bdev(0x557336059800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 09 12:03:03 compute-0 ceph-osd[82922]: bdev(0x557336059800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 09 12:03:03 compute-0 ceph-osd[82922]: bdev(0x557336059800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 09 12:03:03 compute-0 ceph-osd[82922]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 09 12:03:03 compute-0 ceph-osd[82922]: bdev(0x557336059c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 09 12:03:03 compute-0 ceph-osd[82922]: bdev(0x557336059c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 09 12:03:03 compute-0 ceph-osd[82922]: bdev(0x557336059c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 09 12:03:03 compute-0 ceph-osd[82922]: bdev(0x557336059c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 09 12:03:03 compute-0 ceph-osd[82922]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Dec 09 12:03:03 compute-0 ceph-osd[82922]: bdev(0x557336059c00 /var/lib/ceph/osd/ceph-1/block) close
Dec 09 12:03:03 compute-0 podman[83087]: 2025-12-09 12:03:03.134117086 +0000 UTC m=+0.040292774 container create 0cf53cacb93bc3834680e810c1d66e214be1ee701c4f99f81c2dd7b2af017114 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Dec 09 12:03:03 compute-0 systemd[1]: Started libpod-conmon-0cf53cacb93bc3834680e810c1d66e214be1ee701c4f99f81c2dd7b2af017114.scope.
Dec 09 12:03:03 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:03:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ba40487573da0850102fcace504e07c60888f73f354597dc2e9eeab9faaa0e7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 12:03:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ba40487573da0850102fcace504e07c60888f73f354597dc2e9eeab9faaa0e7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:03:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ba40487573da0850102fcace504e07c60888f73f354597dc2e9eeab9faaa0e7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:03:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ba40487573da0850102fcace504e07c60888f73f354597dc2e9eeab9faaa0e7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 12:03:03 compute-0 podman[83087]: 2025-12-09 12:03:03.116128175 +0000 UTC m=+0.022303893 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 09 12:03:03 compute-0 podman[83087]: 2025-12-09 12:03:03.378998719 +0000 UTC m=+0.285174417 container init 0cf53cacb93bc3834680e810c1d66e214be1ee701c4f99f81c2dd7b2af017114 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec 09 12:03:03 compute-0 podman[83087]: 2025-12-09 12:03:03.385965828 +0000 UTC m=+0.292141516 container start 0cf53cacb93bc3834680e810c1d66e214be1ee701c4f99f81c2dd7b2af017114 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 12:03:03 compute-0 podman[83087]: 2025-12-09 12:03:03.38942213 +0000 UTC m=+0.295597848 container attach 0cf53cacb93bc3834680e810c1d66e214be1ee701c4f99f81c2dd7b2af017114 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_tesla, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 12:03:03 compute-0 ceph-osd[82922]: bdev(0x557336059800 /var/lib/ceph/osd/ceph-1/block) close
Dec 09 12:03:03 compute-0 ceph-osd[82922]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Dec 09 12:03:03 compute-0 ceph-osd[82922]: load: jerasure load: lrc 
Dec 09 12:03:03 compute-0 ceph-osd[82922]: bdev(0x557336ef4c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 09 12:03:03 compute-0 ceph-osd[82922]: bdev(0x557336ef4c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 09 12:03:03 compute-0 ceph-osd[82922]: bdev(0x557336ef4c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 09 12:03:03 compute-0 ceph-osd[82922]: bdev(0x557336ef4c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 09 12:03:03 compute-0 ceph-osd[82922]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 09 12:03:03 compute-0 ceph-osd[82922]: bdev(0x557336ef4c00 /var/lib/ceph/osd/ceph-1/block) close
Dec 09 12:03:03 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 09 12:03:03 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:03 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 09 12:03:03 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:03 compute-0 ceph-osd[82922]: bdev(0x557336ef4c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 09 12:03:03 compute-0 ceph-osd[82922]: bdev(0x557336ef4c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 09 12:03:03 compute-0 ceph-osd[82922]: bdev(0x557336ef4c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 09 12:03:03 compute-0 ceph-osd[82922]: bdev(0x557336ef4c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 09 12:03:03 compute-0 ceph-osd[82922]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 09 12:03:03 compute-0 ceph-osd[82922]: bdev(0x557336ef4c00 /var/lib/ceph/osd/ceph-1/block) close
Dec 09 12:03:03 compute-0 ceph-mon[74388]: pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 12:03:03 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:03 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:03 compute-0 lvm[83190]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 09 12:03:03 compute-0 lvm[83190]: VG ceph_vg0 finished
Dec 09 12:03:04 compute-0 hardcore_tesla[83106]: {}
Dec 09 12:03:04 compute-0 systemd[1]: libpod-0cf53cacb93bc3834680e810c1d66e214be1ee701c4f99f81c2dd7b2af017114.scope: Deactivated successfully.
Dec 09 12:03:04 compute-0 systemd[1]: libpod-0cf53cacb93bc3834680e810c1d66e214be1ee701c4f99f81c2dd7b2af017114.scope: Consumed 1.084s CPU time.
Dec 09 12:03:04 compute-0 podman[83087]: 2025-12-09 12:03:04.079055705 +0000 UTC m=+0.985231403 container died 0cf53cacb93bc3834680e810c1d66e214be1ee701c4f99f81c2dd7b2af017114 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_tesla, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 12:03:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ba40487573da0850102fcace504e07c60888f73f354597dc2e9eeab9faaa0e7-merged.mount: Deactivated successfully.
Dec 09 12:03:04 compute-0 podman[83087]: 2025-12-09 12:03:04.118373755 +0000 UTC m=+1.024549453 container remove 0cf53cacb93bc3834680e810c1d66e214be1ee701c4f99f81c2dd7b2af017114 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_tesla, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec 09 12:03:04 compute-0 systemd[1]: libpod-conmon-0cf53cacb93bc3834680e810c1d66e214be1ee701c4f99f81c2dd7b2af017114.scope: Deactivated successfully.
Dec 09 12:03:04 compute-0 sudo[82973]: pam_unix(sudo:session): session closed for user root
Dec 09 12:03:04 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 12:03:04 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:04 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 12:03:04 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:04 compute-0 ceph-osd[82922]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Dec 09 12:03:04 compute-0 ceph-osd[82922]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Dec 09 12:03:04 compute-0 ceph-osd[82922]: bdev(0x557336ef4c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 09 12:03:04 compute-0 ceph-osd[82922]: bdev(0x557336ef4c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 09 12:03:04 compute-0 ceph-osd[82922]: bdev(0x557336ef4c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 09 12:03:04 compute-0 ceph-osd[82922]: bdev(0x557336ef4c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 09 12:03:04 compute-0 ceph-osd[82922]: bdev(0x557336ef4c00 /var/lib/ceph/osd/ceph-1/block) close
Dec 09 12:03:04 compute-0 sudo[83205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 09 12:03:04 compute-0 sudo[83205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:03:04 compute-0 sudo[83205]: pam_unix(sudo:session): session closed for user root
Dec 09 12:03:04 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0)
Dec 09 12:03:04 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/3119520585,v1:192.168.122.101:6801/3119520585]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Dec 09 12:03:04 compute-0 sudo[83239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 12:03:04 compute-0 sudo[83239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:03:04 compute-0 sudo[83239]: pam_unix(sudo:session): session closed for user root
Dec 09 12:03:04 compute-0 ceph-osd[82922]: bdev(0x557336ef4c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 09 12:03:04 compute-0 ceph-osd[82922]: bdev(0x557336ef4c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 09 12:03:04 compute-0 ceph-osd[82922]: bdev(0x557336ef4c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 09 12:03:04 compute-0 ceph-osd[82922]: bdev(0x557336ef4c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 09 12:03:04 compute-0 ceph-osd[82922]: bdev(0x557336ef4c00 /var/lib/ceph/osd/ceph-1/block) close
Dec 09 12:03:04 compute-0 ceph-osd[82922]: bdev(0x557336ef4c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 09 12:03:04 compute-0 ceph-osd[82922]: bdev(0x557336ef4c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 09 12:03:04 compute-0 ceph-osd[82922]: bdev(0x557336ef4c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 09 12:03:04 compute-0 ceph-osd[82922]: bdev(0x557336ef4c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 09 12:03:04 compute-0 ceph-osd[82922]: bdev(0x557336ef4c00 /var/lib/ceph/osd/ceph-1/block) close
Dec 09 12:03:04 compute-0 sudo[83264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Dec 09 12:03:04 compute-0 sudo[83264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:03:04 compute-0 ceph-osd[82922]: bdev(0x557336ef4c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 09 12:03:04 compute-0 ceph-osd[82922]: bdev(0x557336ef4c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 09 12:03:04 compute-0 ceph-osd[82922]: bdev(0x557336ef4c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 09 12:03:04 compute-0 ceph-osd[82922]: bdev(0x557336ef4c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 09 12:03:04 compute-0 ceph-osd[82922]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 09 12:03:04 compute-0 ceph-osd[82922]: bdev(0x557336ef5000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 09 12:03:04 compute-0 ceph-osd[82922]: bdev(0x557336ef5000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 09 12:03:04 compute-0 ceph-osd[82922]: bdev(0x557336ef5000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 09 12:03:04 compute-0 ceph-osd[82922]: bdev(0x557336ef5000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 09 12:03:04 compute-0 ceph-osd[82922]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Dec 09 12:03:04 compute-0 ceph-osd[82922]: bluefs mount
Dec 09 12:03:04 compute-0 ceph-osd[82922]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 09 12:03:04 compute-0 ceph-osd[82922]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 09 12:03:04 compute-0 ceph-osd[82922]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 09 12:03:04 compute-0 ceph-osd[82922]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 09 12:03:04 compute-0 ceph-osd[82922]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 09 12:03:04 compute-0 ceph-osd[82922]: bluefs mount shared_bdev_used = 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: RocksDB version: 7.9.2
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Git sha 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Compile date 2025-07-17 03:12:14
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: DB SUMMARY
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: DB Session ID:  GTWEMRKXUCKVEX8X9LNA
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: CURRENT file:  CURRENT
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: IDENTITY file:  IDENTITY
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                         Options.error_if_exists: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                       Options.create_if_missing: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                         Options.paranoid_checks: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                                     Options.env: 0x557336ec5dc0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                                Options.info_log: 0x557336ec97a0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.max_file_opening_threads: 16
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                              Options.statistics: (nil)
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                               Options.use_fsync: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                       Options.max_log_file_size: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                         Options.allow_fallocate: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                        Options.use_direct_reads: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.create_missing_column_families: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                              Options.db_log_dir: 
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                                 Options.wal_dir: db.wal
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.advise_random_on_open: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                    Options.write_buffer_manager: 0x557336fc0a00
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                            Options.rate_limiter: (nil)
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.unordered_write: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                               Options.row_cache: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                              Options.wal_filter: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.allow_ingest_behind: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.two_write_queues: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.manual_wal_flush: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.wal_compression: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.atomic_flush: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                 Options.log_readahead_size: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.allow_data_in_errors: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.db_host_id: __hostname__
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.max_background_jobs: 4
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.max_background_compactions: -1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.max_subcompactions: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                          Options.max_open_files: -1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                          Options.bytes_per_sync: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.max_background_flushes: -1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Compression algorithms supported:
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         kZSTD supported: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         kXpressCompression supported: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         kBZip2Compression supported: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         kLZ4Compression supported: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         kZlibCompression supported: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         kLZ4HCCompression supported: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         kSnappyCompression supported: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.compaction_filter: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557336ec9b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5573360ef350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.compression: LZ4
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.num_levels: 7
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                           Options.bloom_locality: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                               Options.ttl: 2592000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                       Options.enable_blob_files: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                           Options.min_blob_size: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:           Options.merge_operator: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.compaction_filter: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557336ec9b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5573360ef350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.compression: LZ4
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.num_levels: 7
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                           Options.bloom_locality: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                               Options.ttl: 2592000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                       Options.enable_blob_files: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                           Options.min_blob_size: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:           Options.merge_operator: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.compaction_filter: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557336ec9b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5573360ef350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.compression: LZ4
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.num_levels: 7
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                           Options.bloom_locality: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                               Options.ttl: 2592000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                       Options.enable_blob_files: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                           Options.min_blob_size: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:           Options.merge_operator: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.compaction_filter: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557336ec9b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5573360ef350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.compression: LZ4
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.num_levels: 7
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                           Options.bloom_locality: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                               Options.ttl: 2592000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                       Options.enable_blob_files: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                           Options.min_blob_size: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:           Options.merge_operator: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.compaction_filter: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557336ec9b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5573360ef350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.compression: LZ4
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.num_levels: 7
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                           Options.bloom_locality: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                               Options.ttl: 2592000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                       Options.enable_blob_files: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                           Options.min_blob_size: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:           Options.merge_operator: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.compaction_filter: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557336ec9b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5573360ef350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.compression: LZ4
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.num_levels: 7
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                           Options.bloom_locality: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                               Options.ttl: 2592000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                       Options.enable_blob_files: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                           Options.min_blob_size: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:           Options.merge_operator: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.compaction_filter: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557336ec9b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5573360ef350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.compression: LZ4
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.num_levels: 7
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                           Options.bloom_locality: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                               Options.ttl: 2592000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                       Options.enable_blob_files: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                           Options.min_blob_size: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:           Options.merge_operator: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.compaction_filter: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557336ec9b80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5573360ee9b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.compression: LZ4
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.num_levels: 7
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                           Options.bloom_locality: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                               Options.ttl: 2592000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                       Options.enable_blob_files: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                           Options.min_blob_size: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:           Options.merge_operator: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.compaction_filter: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557336ec9b80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5573360ee9b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.compression: LZ4
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.num_levels: 7
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                           Options.bloom_locality: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                               Options.ttl: 2592000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                       Options.enable_blob_files: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                           Options.min_blob_size: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
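
[Annotation] The Options.table_properties_collectors line above (CompactOnDeletionCollector, Sliding window size = 32768, Deletion trigger = 16384) means RocksDB marks an SST file as a compaction candidate once any sliding window of 32768 consecutive entries contains at least 16384 deletions, i.e. a 50% tombstone density; Deletion ratio = 0 appears to disable the additional ratio-based trigger. Ceph presumably enables this so tombstone-heavy files are compacted away promptly.
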
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:           Options.merge_operator: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.compaction_filter: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557336ec9b80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5573360ee9b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.compression: LZ4
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.num_levels: 7
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                           Options.bloom_locality: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                               Options.ttl: 2592000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                       Options.enable_blob_files: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                           Options.min_blob_size: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file: db/MANIFEST-000032 succeeded, manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5, prev_log_number is 0, max_column_family is 11, min_log_number_to_keep is 5
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
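
[Annotation] The twelve column families recovered here (IDs 0-11, matching max_column_family is 11 above) are BlueStore's sharded RocksDB layout: three shards each for the m, p and O key prefixes plus the single L and P families, alongside default. The shard layout is governed by the bluestore_rocksdb_cfs option; its concrete value is not printed in this excerpt.
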
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 5523d466-fd56-4605-85c6-83a41403d143
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765281784564220, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765281784564472, "job": 1, "event": "recovery_finished"}
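
[Annotation] "Recovering log #31 mode 2" above is the WAL recovery mode, which also appears in the option dump below as Options.wal_recovery_mode: 2. In RocksDB's WALRecoveryMode enum, value 2 is kPointInTimeRecovery (replay the WAL up to the last consistent record). A minimal lookup table; the enum values are assumed from upstream RocksDB sources, since the log itself only records the integer:

    # WALRecoveryMode values (assumed from RocksDB's options.h; the log
    # only prints the integer).
    WAL_RECOVERY_MODES = {
        0: "kTolerateCorruptedTailRecords",
        1: "kAbsoluteConsistency",
        2: "kPointInTimeRecovery",  # the "mode 2" shown in this log
        3: "kSkipAnyCorruptedRecords",
    }
    print(WAL_RECOVERY_MODES[2])  # -> kPointInTimeRecovery
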
Dec 09 12:03:04 compute-0 ceph-osd[82922]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
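
[Annotation] The _open_db line above carries the complete option string BlueStore handed to RocksDB; in Ceph this string normally comes from the bluestore_rocksdb_options config value. A minimal sketch splitting it into key/value pairs, so it can be cross-checked against the per-column-family dumps earlier in the log:

    # Parse the option string logged by bluestore _open_db (copied verbatim
    # from the log line above) into a dict.
    opts_str = (
        "compression=kLZ4Compression,max_write_buffer_number=64,"
        "min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,"
        "write_buffer_size=16777216,max_background_jobs=4,"
        "level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,"
        "max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,"
        "max_total_wal_size=1073741824,writable_file_max_buffer_size=0"
    )
    opts = dict(kv.split("=", 1) for kv in opts_str.split(","))
    assert opts["write_buffer_size"] == "16777216"    # matches Options.write_buffer_size above
    assert opts["compression"] == "kLZ4Compression"   # matches Options.compression: LZ4

Note the naive comma split assumes no option value itself contains a comma; nested values such as block_cache={...} would need a smarter parser.
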
Dec 09 12:03:04 compute-0 ceph-osd[82922]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Dec 09 12:03:04 compute-0 ceph-osd[82922]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Dec 09 12:03:04 compute-0 ceph-osd[82922]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Dec 09 12:03:04 compute-0 ceph-osd[82922]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: freelist init
Dec 09 12:03:04 compute-0 ceph-osd[82922]: freelist _read_cfg
Dec 09 12:03:04 compute-0 ceph-osd[82922]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
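
[Annotation] A quick consistency check on the allocator line: capacity 0x4ffc00000 minus free 0x4ffbfd000 leaves 0x3000 bytes (12 KiB) in use, i.e. exactly three 0x1000 (4 KiB) blocks, which is why the reported fragmentation is effectively zero (1.9e-07).
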
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec 09 12:03:04 compute-0 ceph-osd[82922]: bluefs umount
Dec 09 12:03:04 compute-0 ceph-osd[82922]: bdev(0x557336ef5000 /var/lib/ceph/osd/ceph-1/block) close
Dec 09 12:03:04 compute-0 ceph-osd[82922]: bdev(0x557336ef5000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 09 12:03:04 compute-0 ceph-osd[82922]: bdev(0x557336ef5000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 09 12:03:04 compute-0 ceph-osd[82922]: bdev(0x557336ef5000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 09 12:03:04 compute-0 ceph-osd[82922]: bdev(0x557336ef5000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 09 12:03:04 compute-0 ceph-osd[82922]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Dec 09 12:03:04 compute-0 ceph-osd[82922]: bluefs mount
Dec 09 12:03:04 compute-0 ceph-osd[82922]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 09 12:03:04 compute-0 ceph-osd[82922]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 09 12:03:04 compute-0 ceph-osd[82922]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 09 12:03:04 compute-0 ceph-osd[82922]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 09 12:03:04 compute-0 ceph-osd[82922]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 09 12:03:04 compute-0 ceph-osd[82922]: bluefs mount shared_bdev_used = 4718592
Dec 09 12:03:04 compute-0 ceph-osd[82922]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
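
[Annotation] The sizes in the bluefs/bluestore lines above are mutually consistent: 0x4ffc00000 equals the 21470642176 bytes reported at bdev open (just under 20 GiB), shared_bdev_used = 4718592 is 72 BlueFS blocks of 0x10000 bytes (4.5 MiB), and the db_paths quota 20397110067 equals 95% of the raw capacity. Treating that 95% as a fixed BlueStore ratio is an inference from the numbers, not something the log states. A short sketch verifying the arithmetic:

    # Cross-check the sizes reported in the preceding log lines.
    capacity = 0x4ffc00000                        # bdev open size (hex form)
    assert capacity == 21470642176                # decimal form on the same line
    print(capacity / 2**30)                       # -> 19.99609375, logged as "20 GiB"
    assert 4718592 // 0x10000 == 72               # shared_bdev_used, in 64 KiB bluefs blocks
    assert capacity * 95 // 100 == 20397110067    # db_paths quota = 95% of capacity
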
Dec 09 12:03:04 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: RocksDB version: 7.9.2
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Git sha 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Compile date 2025-07-17 03:12:14
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: DB SUMMARY
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: DB Session ID:  GTWEMRKXUCKVEX8X9LNB
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: CURRENT file:  CURRENT
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: IDENTITY file:  IDENTITY
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                         Options.error_if_exists: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                       Options.create_if_missing: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                         Options.paranoid_checks: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                                     Options.env: 0x5573370642a0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                                Options.info_log: 0x5573371c8760
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.max_file_opening_threads: 16
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                              Options.statistics: (nil)
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                               Options.use_fsync: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                       Options.max_log_file_size: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                         Options.allow_fallocate: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                        Options.use_direct_reads: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.create_missing_column_families: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                              Options.db_log_dir: 
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                                 Options.wal_dir: db.wal
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.advise_random_on_open: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                    Options.write_buffer_manager: 0x557336fc0a00
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                            Options.rate_limiter: (nil)
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.unordered_write: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                               Options.row_cache: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                              Options.wal_filter: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.allow_ingest_behind: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.two_write_queues: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.manual_wal_flush: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.wal_compression: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.atomic_flush: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                 Options.log_readahead_size: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.allow_data_in_errors: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.db_host_id: __hostname__
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.max_background_jobs: 4
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.max_background_compactions: -1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.max_subcompactions: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                          Options.max_open_files: -1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                          Options.bytes_per_sync: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.max_background_flushes: -1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Compression algorithms supported:
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         kZSTD supported: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         kXpressCompression supported: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         kBZip2Compression supported: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         kLZ4Compression supported: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         kZlibCompression supported: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         kLZ4HCCompression supported: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         kSnappyCompression supported: 1
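
[Annotation] Per the capability list above, this RocksDB build was compiled with LZ4, LZ4HC, zlib and snappy only; zstd, bzip2 and Xpress are unavailable. The compression=kLZ4Compression choice in the _open_db option string therefore uses a supported codec, whereas a configuration requesting kZSTD would presumably be rejected on this build.
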
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.compaction_filter: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557336ec9680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5573360ef350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.compression: LZ4
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.num_levels: 7
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                           Options.bloom_locality: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                               Options.ttl: 2592000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                       Options.enable_blob_files: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                           Options.min_blob_size: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
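
[Annotation] The block_cache capacity in this second open is 483183820 bytes, which is 0.45 × 2^30 rounded down, whereas the dumps from the first open (e.g. the [O-2] family above) showed 536870912 = 0.5 × 2^30. The change is consistent with BlueStore resizing its RocksDB block cache between opens (for example via cache autotuning); the log records only the resulting capacities, so the mechanism is an assumption.
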
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:           Options.merge_operator: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.compaction_filter: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557336ec9680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5573360ef350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.compression: LZ4
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.num_levels: 7
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                           Options.bloom_locality: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                               Options.ttl: 2592000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                       Options.enable_blob_files: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                           Options.min_blob_size: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
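The table_factory block in the dump above pins down the on-disk SST layout these shards share: 4 KiB data blocks, format_version 5, whole-key bloom filters, and index/filter blocks held in one shared block cache of 483183820 bytes (roughly 461 MiB). BinnedLRUCache is Ceph's own cache implementation, so the following is only an approximating sketch that rebuilds the same table options against the stock RocksDB C++ API, with NewLRUCache standing in for BinnedLRUCache; the bloom bits-per-key value is an assumption, since the dump says only "bloomfilter".

    #include <rocksdb/cache.h>
    #include <rocksdb/filter_policy.h>
    #include <rocksdb/table.h>

    // Approximation of the logged table_factory options. NewLRUCache stands
    // in for Ceph's BinnedLRUCache; capacity and num_shard_bits match the dump.
    rocksdb::BlockBasedTableOptions MakeLoggedTableOptions() {
      rocksdb::BlockBasedTableOptions t;
      t.block_size = 4096;                      // block_size: 4096
      t.block_restart_interval = 16;            // block_restart_interval: 16
      t.format_version = 5;                     // format_version: 5
      t.cache_index_and_filter_blocks = true;   // cache_index_and_filter_blocks: 1
      t.pin_top_level_index_and_filter = true;  // pin_top_level_index_and_filter: 1
      t.whole_key_filtering = true;             // whole_key_filtering: 1
      t.filter_policy.reset(
          rocksdb::NewBloomFilterPolicy(10));   // bits/key assumed, not logged
      t.block_cache =
          rocksdb::NewLRUCache(483183820, 4);   // capacity / num_shard_bits
      return t;
    }

Attaching it to a column family is one line: cf.table_factory.reset(rocksdb::NewBlockBasedTableFactory(MakeLoggedTableOptions()));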
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:           Options.merge_operator: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.compaction_filter: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557336ec9680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5573360ef350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.compression: LZ4
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.num_levels: 7
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                           Options.bloom_locality: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                               Options.ttl: 2592000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                       Options.enable_blob_files: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                           Options.min_blob_size: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
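The per-family write and compaction settings repeat unchanged for every shard: 16 MiB memtables (write_buffer_size: 16777216) flushed only once six have accumulated (min_write_buffer_number_to_merge: 6, so roughly 96 MiB per flush), L0 compaction starting at 8 files with write throttling at 20 and a stall at 36, a 1 GiB L1 budget growing 8x per level, and a 30-day ttl (2592000 s). A minimal sketch of the same values as a rocksdb::ColumnFamilyOptions, purely illustrative since ceph-osd derives them from its bluestore option strings rather than from code like this:

    #include <rocksdb/options.h>

    // Column-family options matching the values logged for the m-*/p-* shards.
    rocksdb::ColumnFamilyOptions MakeLoggedCfOptions() {
      rocksdb::ColumnFamilyOptions cf;
      cf.write_buffer_size = 16 << 20;             // 16777216: one 16 MiB memtable
      cf.max_write_buffer_number = 64;
      cf.min_write_buffer_number_to_merge = 6;     // flush ~96 MiB at a time
      cf.compression = rocksdb::kLZ4Compression;   // Options.compression: LZ4
      cf.num_levels = 7;
      cf.level0_file_num_compaction_trigger = 8;   // start compacting L0 at 8 files
      cf.level0_slowdown_writes_trigger = 20;      // throttle writes at 20 files
      cf.level0_stop_writes_trigger = 36;          // stall writes at 36 files
      cf.target_file_size_base = 64 << 20;         // 67108864
      cf.max_bytes_for_level_base = 1 << 30;       // 1073741824: L1 budget
      cf.max_bytes_for_level_multiplier = 8.0;     // each level 8x the previous
      cf.ttl = 2592000;                            // 30 days
      return cf;
    }

With the multiplier at 8, num_levels 7, and level_compaction_dynamic_level_bytes: 0, level capacities run conventionally bottom-up: 1 GiB at L1, 8 GiB at L2, 64 GiB at L3, and so on.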
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:           Options.merge_operator: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.compaction_filter: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557336ec9680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5573360ef350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.compression: LZ4
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.num_levels: 7
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                           Options.bloom_locality: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                               Options.ttl: 2592000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                       Options.enable_blob_files: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                           Options.min_blob_size: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:           Options.merge_operator: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.compaction_filter: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557336ec9680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5573360ef350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.compression: LZ4
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.num_levels: 7
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                           Options.bloom_locality: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                               Options.ttl: 2592000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                       Options.enable_blob_files: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                           Options.min_blob_size: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:           Options.merge_operator: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.compaction_filter: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557336ec9680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5573360ef350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.compression: LZ4
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.num_levels: 7
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                           Options.bloom_locality: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                               Options.ttl: 2592000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                       Options.enable_blob_files: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                           Options.min_blob_size: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:           Options.merge_operator: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.compaction_filter: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557336ec9680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5573360ef350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.compression: LZ4
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.num_levels: 7
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                           Options.bloom_locality: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                               Options.ttl: 2592000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                       Options.enable_blob_files: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                           Options.min_blob_size: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:           Options.merge_operator: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.compaction_filter: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557336ec9ac0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5573360ee9b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.compression: LZ4
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.num_levels: 7
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                           Options.bloom_locality: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                               Options.ttl: 2592000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                       Options.enable_blob_files: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                           Options.min_blob_size: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:           Options.merge_operator: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.compaction_filter: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557336ec9ac0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5573360ee9b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.compression: LZ4
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.num_levels: 7
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                           Options.bloom_locality: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                               Options.ttl: 2592000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                       Options.enable_blob_files: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                           Options.min_blob_size: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:           Options.merge_operator: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.compaction_filter: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.compaction_filter_factory: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:  Options.sst_partitioner_factory: None
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557336ec9ac0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5573360ee9b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.write_buffer_size: 16777216
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:  Options.max_write_buffer_number: 64
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.compression: LZ4
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:       Options.prefix_extractor: nullptr
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.num_levels: 7
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.compression_opts.level: 32767
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.compression_opts.strategy: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                  Options.compression_opts.enabled: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                        Options.arena_block_size: 1048576
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.disable_auto_compactions: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.inplace_update_support: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                           Options.bloom_locality: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                    Options.max_successive_merges: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.paranoid_file_checks: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.force_consistency_checks: 1
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.report_bg_io_stats: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                               Options.ttl: 2592000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                       Options.enable_blob_files: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                           Options.min_blob_size: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                          Options.blob_file_size: 268435456
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb:                Options.blob_file_starting_level: 0
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
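
[Annotation] The per-column-family dumps above are near-identical: [p-2], [O-0], [O-1] and [O-2] differ only in the table_factory's block_cache, with p-2 on a ~460 MB BinnedLRUCache at 0x5573360ef350 while O-0/O-1/O-2 all share the 512 MB cache at 0x5573360ee9b0. The two "(skipping printing options)" lines cover the remaining families ([L] and [P]); RocksDB stops emitting full dumps after the first ten families, as the recovery listing below shows twelve in total. A minimal sketch for verifying the dumps mechanically, assuming the journal has been exported to a text file (osd.journal.txt is a hypothetical name) and ignoring the indented table_factory continuation lines:

    import re
    from collections import defaultdict

    CF_HEADER = re.compile(r"Options for column family \[([^\]]+)\]")
    OPT_LINE = re.compile(r"rocksdb:\s+(Options\.[\w.\[\]]+):\s+(.*\S)")

    def per_cf_options(path):
        """Collect the Options.* key/value pairs printed for each column family."""
        opts, cf = defaultdict(dict), None
        with open(path) as fh:
            for line in fh:
                m = CF_HEADER.search(line)
                if m:
                    cf = m.group(1)
                    continue
                m = OPT_LINE.search(line)
                if m and cf is not None:
                    opts[cf][m.group(1)] = m.group(2)
        return opts

    if __name__ == "__main__":
        opts = per_cf_options("osd.journal.txt")  # hypothetical journal export
        base = opts.get("p-2", {})
        for cf, kv in sorted(opts.items()):
            diff = {k: v for k, v in kv.items() if base.get(k) != v}
            print(cf, diff if diff else "(identical to p-2)")
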
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file: db/MANIFEST-000032 succeeded, manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5, prev_log_number is 0, max_column_family is 11, min_log_number_to_keep is 5
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 5523d466-fd56-4605-85c6-83a41403d143
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765281784832179, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765281784837936, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765281784, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5523d466-fd56-4605-85c6-83a41403d143", "db_session_id": "GTWEMRKXUCKVEX8X9LNB", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765281784841186, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765281784, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5523d466-fd56-4605-85c6-83a41403d143", "db_session_id": "GTWEMRKXUCKVEX8X9LNB", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765281784846938, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765281784, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5523d466-fd56-4605-85c6-83a41403d143", "db_session_id": "GTWEMRKXUCKVEX8X9LNB", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765281784848668, "job": 1, "event": "recovery_finished"}
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5573370c6000
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: DB pointer 0x557337070000
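The rocksdb EVENT_LOG_v1 lines above embed a machine-readable JSON object after the "EVENT_LOG_v1 " token, which makes the recovery and table_file_creation events easy to extract from a journal export. A minimal sketch, assuming the journal has been saved to a plain-text file (the "journal.txt" path is hypothetical; the marker string is taken from the lines themselves):

    import json

    MARKER = "EVENT_LOG_v1 "

    def parse_event_log(line: str):
        """Return the JSON payload of a rocksdb EVENT_LOG_v1 line, or None."""
        idx = line.find(MARKER)
        if idx == -1:
            return None
        return json.loads(line[idx + len(MARKER):])

    # Pull out the recovery lifecycle and SST file creations seen above
    with open("journal.txt") as fh:  # hypothetical export of this journal
        for line in fh:
            ev = parse_event_log(line)
            if ev and ev.get("event") in ("recovery_started", "table_file_creation", "recovery_finished"):
                print(ev["time_micros"], ev["event"], ev.get("cf_name", ""))

Run against the lines above, this prints one recovery_started event for WAL file 31, three table_file_creation events (column families default, p-0, O-2), and one recovery_finished event.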
Dec 09 12:03:04 compute-0 ceph-osd[82922]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
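The _open_db line records the RocksDB options BlueStore applied as a single comma-separated key=value string. A minimal sketch for splitting it into a dict, assuming no embedded commas in values (which holds for the string shown):

    def parse_rocksdb_options(opts: str) -> dict:
        """Split 'k1=v1,k2=v2,...' into a dict; values stay strings."""
        return dict(kv.split("=", 1) for kv in opts.split(","))

    opts = parse_rocksdb_options(
        "compression=kLZ4Compression,max_write_buffer_number=64,"
        "min_write_buffer_number_to_merge=6,write_buffer_size=16777216"
    )
    assert opts["write_buffer_size"] == "16777216"  # 16 MiB memtables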
Dec 09 12:03:04 compute-0 ceph-osd[82922]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Dec 09 12:03:04 compute-0 ceph-osd[82922]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 09 12:03:04 compute-0 ceph-osd[82922]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5573360ef350#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5573360ef350#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5573360ef350#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5573360ef350#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5573360ef350#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5573360ef350#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5573360ef350#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5573360ee9b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5573360ee9b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5573360ee9b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5573360ef350#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5573360ef350#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
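The DUMPING STATS block above repeats the same per-column-family layout (default, m-0..m-2, p-0..p-2, O-0..O-2, L, P) with near-zero counters, which is expected fractions of a second after recovery on an empty OSD; only the column families that received entries during WAL replay (default, p-0, O-2) show L0 files. A minimal sketch for pulling the column-family names and their Sum file counts out of such a dump, with the header and row formats taken from the lines above:

    import re

    CF_HEADER = re.compile(r"\*\* Compaction Stats \[(.+?)\] \*\*")
    SUM_ROW = re.compile(r"^\s*Sum\s+(\d+)/(\d+)")

    def summarize(dump_lines):
        """Yield (column_family, files_at_rest) pairs from a stats dump."""
        cf = None
        for line in dump_lines:
            m = CF_HEADER.search(line)
            if m:
                cf = m.group(1)
                continue
            m = SUM_ROW.match(line)
            if m and cf is not None:
                yield cf, int(m.group(1))
                cf = None  # each CF prints two tables; only the Level table has a Sum row

Fed the dump above, this yields ("default", 2), ("p-0", 1), ("O-2", 1), and zero-file entries for the remaining column families.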
Dec 09 12:03:04 compute-0 ceph-osd[82922]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Dec 09 12:03:04 compute-0 ceph-osd[82922]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/hello/cls_hello.cc:316: loading cls_hello
Dec 09 12:03:04 compute-0 ceph-osd[82922]: _get_class not permitted to load lua
Dec 09 12:03:04 compute-0 ceph-osd[82922]: _get_class not permitted to load sdk
Dec 09 12:03:04 compute-0 ceph-osd[82922]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Dec 09 12:03:04 compute-0 ceph-osd[82922]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Dec 09 12:03:04 compute-0 ceph-osd[82922]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Dec 09 12:03:04 compute-0 ceph-osd[82922]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Dec 09 12:03:04 compute-0 ceph-osd[82922]: osd.1 0 load_pgs
Dec 09 12:03:04 compute-0 ceph-osd[82922]: osd.1 0 load_pgs opened 0 pgs
Dec 09 12:03:04 compute-0 ceph-osd[82922]: osd.1 0 log_to_monitors true
Dec 09 12:03:04 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-osd-1[82918]: 2025-12-09T12:03:04.877+0000 7fc8f7c25740 -1 osd.1 0 log_to_monitors true
Dec 09 12:03:04 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0)
Dec 09 12:03:04 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/3815659079,v1:192.168.122.100:6803/3815659079]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Dec 09 12:03:04 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 09 12:03:04 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:05 compute-0 podman[83762]: 2025-12-09 12:03:05.021793722 +0000 UTC m=+0.052979068 container exec a4b836a90c212a6dcd631d0879d1d67c676cdc16d15f42acc55a122ac896ef53 (image=quay.io/ceph/ceph:v19, name=ceph-750b57e3-924f-51a5-ab09-01517535f732-mon-compute-0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec 09 12:03:05 compute-0 podman[83762]: 2025-12-09 12:03:05.111911299 +0000 UTC m=+0.143096625 container exec_died a4b836a90c212a6dcd631d0879d1d67c676cdc16d15f42acc55a122ac896ef53 (image=quay.io/ceph/ceph:v19, name=ceph-750b57e3-924f-51a5-ab09-01517535f732-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 12:03:05 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:05 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:05 compute-0 ceph-mon[74388]: from='osd.0 [v2:192.168.122.101:6800/3119520585,v1:192.168.122.101:6801/3119520585]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Dec 09 12:03:05 compute-0 ceph-mon[74388]: pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 12:03:05 compute-0 ceph-mon[74388]: from='osd.1 [v2:192.168.122.100:6802/3815659079,v1:192.168.122.100:6803/3815659079]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Dec 09 12:03:05 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:05 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Dec 09 12:03:05 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 09 12:03:05 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/3119520585,v1:192.168.122.101:6801/3119520585]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Dec 09 12:03:05 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/3815659079,v1:192.168.122.100:6803/3815659079]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
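The dispatch/finished pairs above trace each mon_command through the monitor: the audit channel logs the JSON command once when it is dispatched and again when it completes. The same JSON command structure can be submitted through the librados Python binding; a minimal sketch, assuming a reachable cluster and the default conffile path:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")  # assumed path
    cluster.connect()
    cmd = {"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}
    ret, outbuf, outs = cluster.mon_command(json.dumps(cmd), b"")
    print(ret, outs)  # 0 on success; outs carries the human-readable status
    cluster.shutdown()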
Dec 09 12:03:05 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e6 e6: 2 total, 0 up, 2 in
Dec 09 12:03:05 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : osdmap e6: 2 total, 0 up, 2 in
Dec 09 12:03:05 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Dec 09 12:03:05 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/3815659079,v1:192.168.122.100:6803/3815659079]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Dec 09 12:03:05 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e6 create-or-move crush item name 'osd.1' initial_weight 0.0195 at location {host=compute-0,root=default}
Dec 09 12:03:05 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 09 12:03:05 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 09 12:03:05 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]} v 0)
Dec 09 12:03:05 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/3119520585,v1:192.168.122.101:6801/3119520585]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Dec 09 12:03:05 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e6 create-or-move crush item name 'osd.0' initial_weight 0.0195 at location {host=compute-1,root=default}
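The initial_weight of 0.0195 in the create-or-move commands above follows the CRUSH convention of weighting a device by its capacity in TiB, so it corresponds to roughly a 20 GiB OSD device. A worked check (the 1024 divisor is the GiB-to-TiB conversion; the device size itself is inferred, not logged here):

    # CRUSH weight = device capacity in TiB, rounded to 4 decimal places
    size_gib = 20                        # inferred ~20 GiB OSD device
    weight = round(size_gib / 1024, 4)   # GiB -> TiB
    print(weight)                        # 0.0195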
Dec 09 12:03:05 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 09 12:03:05 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 09 12:03:05 compute-0 ceph-mgr[74679]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 09 12:03:05 compute-0 ceph-mgr[74679]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 09 12:03:05 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 09 12:03:05 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:05 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 09 12:03:05 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:05 compute-0 sudo[83264]: pam_unix(sudo:session): session closed for user root
Dec 09 12:03:05 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 12:03:05 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:05 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 12:03:05 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:05 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 09 12:03:05 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
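The burst of mgr/cephadm/host.* config-key writes above is cephadm persisting its per-host facts and device caches in the monitors' key-value store. A sketch for inspecting one of those cached entries from a node with an admin keyring (the key name is copied from the log; the stored value is whatever cephadm wrote, typically JSON):

    import json, subprocess

    # Read back cephadm's cached device inventory for compute-0 from the
    # monitor config-key store (key name as logged above).
    raw = subprocess.run(
        ["ceph", "config-key", "get", "mgr/cephadm/host.compute-0.devices.0"],
        capture_output=True, text=True, check=True,
    ).stdout
    try:
        print(json.dumps(json.loads(raw), indent=2))
    except ValueError:
        print(raw)  # fall back to the raw value if it is not JSON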
Dec 09 12:03:05 compute-0 sudo[83847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 12:03:05 compute-0 sudo[83847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:03:05 compute-0 sudo[83847]: pam_unix(sudo:session): session closed for user root
Dec 09 12:03:05 compute-0 sudo[83872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 09 12:03:05 compute-0 sudo[83872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:03:05 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Dec 09 12:03:05 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Dec 09 12:03:05 compute-0 sudo[83872]: pam_unix(sudo:session): session closed for user root
Dec 09 12:03:06 compute-0 sudo[83927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 12:03:06 compute-0 sudo[83927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:03:06 compute-0 sudo[83927]: pam_unix(sudo:session): session closed for user root
Dec 09 12:03:06 compute-0 sudo[83952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 750b57e3-924f-51a5-ab09-01517535f732 -- inventory --format=json-pretty --filter-for-batch
Dec 09 12:03:06 compute-0 sudo[83952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:03:06 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Dec 09 12:03:06 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 09 12:03:06 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/3815659079,v1:192.168.122.100:6803/3815659079]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec 09 12:03:06 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/3119520585,v1:192.168.122.101:6801/3119520585]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]': finished
Dec 09 12:03:06 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e7 e7: 2 total, 0 up, 2 in
Dec 09 12:03:06 compute-0 ceph-osd[82922]: osd.1 0 done with init, starting boot process
Dec 09 12:03:06 compute-0 ceph-osd[82922]: osd.1 0 start_boot
Dec 09 12:03:06 compute-0 ceph-osd[82922]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Dec 09 12:03:06 compute-0 ceph-osd[82922]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Dec 09 12:03:06 compute-0 ceph-osd[82922]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Dec 09 12:03:06 compute-0 ceph-osd[82922]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Dec 09 12:03:06 compute-0 ceph-osd[82922]: osd.1 0  bench count 12288000 bsize 4 KiB
Dec 09 12:03:06 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : osdmap e7: 2 total, 0 up, 2 in
Dec 09 12:03:06 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 09 12:03:06 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 09 12:03:06 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 09 12:03:06 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 09 12:03:06 compute-0 ceph-mgr[74679]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 09 12:03:06 compute-0 ceph-mgr[74679]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 09 12:03:06 compute-0 ceph-mon[74388]: from='osd.0 [v2:192.168.122.101:6800/3119520585,v1:192.168.122.101:6801/3119520585]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Dec 09 12:03:06 compute-0 ceph-mon[74388]: from='osd.1 [v2:192.168.122.100:6802/3815659079,v1:192.168.122.100:6803/3815659079]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Dec 09 12:03:06 compute-0 ceph-mon[74388]: osdmap e6: 2 total, 0 up, 2 in
Dec 09 12:03:06 compute-0 ceph-mon[74388]: from='osd.1 [v2:192.168.122.100:6802/3815659079,v1:192.168.122.100:6803/3815659079]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Dec 09 12:03:06 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 09 12:03:06 compute-0 ceph-mon[74388]: from='osd.0 [v2:192.168.122.101:6800/3119520585,v1:192.168.122.101:6801/3119520585]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Dec 09 12:03:06 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 09 12:03:06 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:06 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:06 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:06 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:06 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:06 compute-0 ceph-mgr[74679]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/3815659079; not ready for session (expect reconnect)
Dec 09 12:03:06 compute-0 ceph-mgr[74679]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/3119520585; not ready for session (expect reconnect)
Dec 09 12:03:06 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 09 12:03:06 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 09 12:03:06 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 09 12:03:06 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 09 12:03:06 compute-0 ceph-mgr[74679]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 09 12:03:06 compute-0 ceph-mgr[74679]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 09 12:03:06 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e7 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 12:03:06 compute-0 podman[84018]: 2025-12-09 12:03:06.534543631 +0000 UTC m=+0.043561231 container create e49c2cdd6d02b63e3d0a14744de415a932b17dec2f17c2657ab4ae4686173254 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Dec 09 12:03:06 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 09 12:03:06 compute-0 systemd[1]: Started libpod-conmon-e49c2cdd6d02b63e3d0a14744de415a932b17dec2f17c2657ab4ae4686173254.scope.
Dec 09 12:03:06 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:06 compute-0 podman[84018]: 2025-12-09 12:03:06.516887521 +0000 UTC m=+0.025905121 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 09 12:03:06 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:03:06 compute-0 podman[84018]: 2025-12-09 12:03:06.644481357 +0000 UTC m=+0.153498957 container init e49c2cdd6d02b63e3d0a14744de415a932b17dec2f17c2657ab4ae4686173254 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_mayer, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Dec 09 12:03:06 compute-0 podman[84018]: 2025-12-09 12:03:06.650869436 +0000 UTC m=+0.159887026 container start e49c2cdd6d02b63e3d0a14744de415a932b17dec2f17c2657ab4ae4686173254 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_mayer, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 09 12:03:06 compute-0 sleepy_mayer[84034]: 167 167
Dec 09 12:03:06 compute-0 systemd[1]: libpod-e49c2cdd6d02b63e3d0a14744de415a932b17dec2f17c2657ab4ae4686173254.scope: Deactivated successfully.
Dec 09 12:03:06 compute-0 conmon[84034]: conmon e49c2cdd6d02b63e3d0a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e49c2cdd6d02b63e3d0a14744de415a932b17dec2f17c2657ab4ae4686173254.scope/container/memory.events
Dec 09 12:03:06 compute-0 podman[84018]: 2025-12-09 12:03:06.672803737 +0000 UTC m=+0.181821337 container attach e49c2cdd6d02b63e3d0a14744de415a932b17dec2f17c2657ab4ae4686173254 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_mayer, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 12:03:06 compute-0 podman[84018]: 2025-12-09 12:03:06.673279891 +0000 UTC m=+0.182297481 container died e49c2cdd6d02b63e3d0a14744de415a932b17dec2f17c2657ab4ae4686173254 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_mayer, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 09 12:03:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-610445f24e1b644e6d113060ec63bf2d995984509ad4372c1693d865874ddbd6-merged.mount: Deactivated successfully.
Dec 09 12:03:06 compute-0 podman[84018]: 2025-12-09 12:03:06.780852951 +0000 UTC m=+0.289870551 container remove e49c2cdd6d02b63e3d0a14744de415a932b17dec2f17c2657ab4ae4686173254 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 12:03:06 compute-0 systemd[1]: libpod-conmon-e49c2cdd6d02b63e3d0a14744de415a932b17dec2f17c2657ab4ae4686173254.scope: Deactivated successfully.
Dec 09 12:03:06 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 12:03:06 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 09 12:03:06 compute-0 podman[84060]: 2025-12-09 12:03:06.944990955 +0000 UTC m=+0.052711840 container create d591380852e6790a061afa7e7380212d4a02313d04f73b95b8683d15a15bdb30 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_hawking, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Dec 09 12:03:06 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:06 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 09 12:03:06 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:06 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 09 12:03:07 compute-0 systemd[1]: Started libpod-conmon-d591380852e6790a061afa7e7380212d4a02313d04f73b95b8683d15a15bdb30.scope.
Dec 09 12:03:07 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:07 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 09 12:03:07 compute-0 podman[84060]: 2025-12-09 12:03:06.917035058 +0000 UTC m=+0.024755963 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 09 12:03:07 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:03:07 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14e2c960017672fac872cc811c218d2ec19432d961bf5900a91710d03bef01c5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 12:03:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14e2c960017672fac872cc811c218d2ec19432d961bf5900a91710d03bef01c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:03:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14e2c960017672fac872cc811c218d2ec19432d961bf5900a91710d03bef01c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:03:07 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Dec 09 12:03:07 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Dec 09 12:03:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14e2c960017672fac872cc811c218d2ec19432d961bf5900a91710d03bef01c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 12:03:07 compute-0 ceph-mgr[74679]: [cephadm INFO root] Adjusting osd_memory_target on compute-1 to  5247M
Dec 09 12:03:07 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-1 to  5247M
Dec 09 12:03:07 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec 09 12:03:07 compute-0 podman[84060]: 2025-12-09 12:03:07.05945057 +0000 UTC m=+0.167171475 container init d591380852e6790a061afa7e7380212d4a02313d04f73b95b8683d15a15bdb30 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 12:03:07 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:07 compute-0 podman[84060]: 2025-12-09 12:03:07.066407439 +0000 UTC m=+0.174128324 container start d591380852e6790a061afa7e7380212d4a02313d04f73b95b8683d15a15bdb30 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_hawking, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec 09 12:03:07 compute-0 podman[84060]: 2025-12-09 12:03:07.091483242 +0000 UTC m=+0.199204117 container attach d591380852e6790a061afa7e7380212d4a02313d04f73b95b8683d15a15bdb30 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_hawking, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec 09 12:03:07 compute-0 ceph-mgr[74679]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/3815659079; not ready for session (expect reconnect)
Dec 09 12:03:07 compute-0 ceph-mgr[74679]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/3119520585; not ready for session (expect reconnect)
Dec 09 12:03:07 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 09 12:03:07 compute-0 ceph-mon[74388]: from='osd.1 [v2:192.168.122.100:6802/3815659079,v1:192.168.122.100:6803/3815659079]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec 09 12:03:07 compute-0 ceph-mon[74388]: from='osd.0 [v2:192.168.122.101:6800/3119520585,v1:192.168.122.101:6801/3119520585]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]': finished
Dec 09 12:03:07 compute-0 ceph-mon[74388]: osdmap e7: 2 total, 0 up, 2 in
Dec 09 12:03:07 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 09 12:03:07 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 09 12:03:07 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 09 12:03:07 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 09 12:03:07 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:07 compute-0 ceph-mon[74388]: pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 12:03:07 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:07 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:07 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:07 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:07 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Dec 09 12:03:07 compute-0 ceph-mon[74388]: Adjusting osd_memory_target on compute-1 to  5247M
Dec 09 12:03:07 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:07 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 09 12:03:07 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 09 12:03:07 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 09 12:03:07 compute-0 ceph-mgr[74679]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 09 12:03:07 compute-0 ceph-mgr[74679]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 09 12:03:07 compute-0 interesting_hawking[84076]: [
Dec 09 12:03:07 compute-0 interesting_hawking[84076]:     {
Dec 09 12:03:07 compute-0 interesting_hawking[84076]:         "available": false,
Dec 09 12:03:07 compute-0 interesting_hawking[84076]:         "being_replaced": false,
Dec 09 12:03:07 compute-0 interesting_hawking[84076]:         "ceph_device_lvm": false,
Dec 09 12:03:07 compute-0 interesting_hawking[84076]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 09 12:03:07 compute-0 interesting_hawking[84076]:         "lsm_data": {},
Dec 09 12:03:07 compute-0 interesting_hawking[84076]:         "lvs": [],
Dec 09 12:03:07 compute-0 interesting_hawking[84076]:         "path": "/dev/sr0",
Dec 09 12:03:07 compute-0 interesting_hawking[84076]:         "rejected_reasons": [
Dec 09 12:03:07 compute-0 interesting_hawking[84076]:             "Insufficient space (<5GB)",
Dec 09 12:03:07 compute-0 interesting_hawking[84076]:             "Has a FileSystem"
Dec 09 12:03:07 compute-0 interesting_hawking[84076]:         ],
Dec 09 12:03:07 compute-0 interesting_hawking[84076]:         "sys_api": {
Dec 09 12:03:07 compute-0 interesting_hawking[84076]:             "actuators": null,
Dec 09 12:03:07 compute-0 interesting_hawking[84076]:             "device_nodes": [
Dec 09 12:03:07 compute-0 interesting_hawking[84076]:                 "sr0"
Dec 09 12:03:07 compute-0 interesting_hawking[84076]:             ],
Dec 09 12:03:07 compute-0 interesting_hawking[84076]:             "devname": "sr0",
Dec 09 12:03:07 compute-0 interesting_hawking[84076]:             "human_readable_size": "482.00 KB",
Dec 09 12:03:07 compute-0 interesting_hawking[84076]:             "id_bus": "ata",
Dec 09 12:03:07 compute-0 interesting_hawking[84076]:             "model": "QEMU DVD-ROM",
Dec 09 12:03:07 compute-0 interesting_hawking[84076]:             "nr_requests": "2",
Dec 09 12:03:07 compute-0 interesting_hawking[84076]:             "parent": "/dev/sr0",
Dec 09 12:03:07 compute-0 interesting_hawking[84076]:             "partitions": {},
Dec 09 12:03:07 compute-0 interesting_hawking[84076]:             "path": "/dev/sr0",
Dec 09 12:03:07 compute-0 interesting_hawking[84076]:             "removable": "1",
Dec 09 12:03:07 compute-0 interesting_hawking[84076]:             "rev": "2.5+",
Dec 09 12:03:07 compute-0 interesting_hawking[84076]:             "ro": "0",
Dec 09 12:03:07 compute-0 interesting_hawking[84076]:             "rotational": "1",
Dec 09 12:03:07 compute-0 interesting_hawking[84076]:             "sas_address": "",
Dec 09 12:03:07 compute-0 interesting_hawking[84076]:             "sas_device_handle": "",
Dec 09 12:03:07 compute-0 interesting_hawking[84076]:             "scheduler_mode": "mq-deadline",
Dec 09 12:03:07 compute-0 interesting_hawking[84076]:             "sectors": 0,
Dec 09 12:03:07 compute-0 interesting_hawking[84076]:             "sectorsize": "2048",
Dec 09 12:03:07 compute-0 interesting_hawking[84076]:             "size": 493568.0,
Dec 09 12:03:07 compute-0 interesting_hawking[84076]:             "support_discard": "2048",
Dec 09 12:03:07 compute-0 interesting_hawking[84076]:             "type": "disk",
Dec 09 12:03:07 compute-0 interesting_hawking[84076]:             "vendor": "QEMU"
Dec 09 12:03:07 compute-0 interesting_hawking[84076]:         }
Dec 09 12:03:07 compute-0 interesting_hawking[84076]:     }
Dec 09 12:03:07 compute-0 interesting_hawking[84076]: ]
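The JSON block above is the ceph-volume inventory for this host: the only device reported, /dev/sr0, is unavailable, so there is nothing here for an OSD service spec to consume. A minimal sketch of filtering such a report, assuming it has been saved to inventory.json (hypothetical filename):

    import json

    # Split a ceph-volume inventory report (as printed above) into
    # usable devices and rejected ones with their reasons.
    with open("inventory.json") as f:
        devices = json.load(f)

    for dev in devices:
        if dev["available"]:
            print("usable:", dev["path"], dev["sys_api"]["human_readable_size"])
        else:
            print("rejected:", dev["path"], "--", "; ".join(dev["rejected_reasons"]))
    # Here: rejected: /dev/sr0 -- Insufficient space (<5GB); Has a FileSystem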
Dec 09 12:03:07 compute-0 systemd[1]: libpod-d591380852e6790a061afa7e7380212d4a02313d04f73b95b8683d15a15bdb30.scope: Deactivated successfully.
Dec 09 12:03:07 compute-0 podman[84060]: 2025-12-09 12:03:07.746451259 +0000 UTC m=+0.854172144 container died d591380852e6790a061afa7e7380212d4a02313d04f73b95b8683d15a15bdb30 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_hawking, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Dec 09 12:03:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-14e2c960017672fac872cc811c218d2ec19432d961bf5900a91710d03bef01c5-merged.mount: Deactivated successfully.
Dec 09 12:03:07 compute-0 podman[84060]: 2025-12-09 12:03:07.839507112 +0000 UTC m=+0.947227997 container remove d591380852e6790a061afa7e7380212d4a02313d04f73b95b8683d15a15bdb30 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec 09 12:03:07 compute-0 systemd[1]: libpod-conmon-d591380852e6790a061afa7e7380212d4a02313d04f73b95b8683d15a15bdb30.scope: Deactivated successfully.
Dec 09 12:03:07 compute-0 sudo[83952]: pam_unix(sudo:session): session closed for user root
Dec 09 12:03:07 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 12:03:07 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:07 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 12:03:07 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:07 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 12:03:07 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:07 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 12:03:07 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:07 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Dec 09 12:03:07 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Dec 09 12:03:07 compute-0 ceph-mgr[74679]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 127.9M
Dec 09 12:03:07 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 127.9M
Dec 09 12:03:07 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec 09 12:03:07 compute-0 ceph-mgr[74679]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 134200524: error parsing value: Value '134200524' is below minimum 939524096
Dec 09 12:03:07 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 134200524: error parsing value: Value '134200524' is below minimum 939524096
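The warning above is plain arithmetic: cephadm's memory autotuner arrived at a per-OSD osd_memory_target of 134200524 bytes (about 128 MiB) for compute-0, which is below the option's hard minimum of 939524096 bytes (896 MiB), so the config set is refused. The division of host memory by OSD count happens inside cephadm and is not visible in this log; the sketch below only reproduces the logged check:

    # Values taken from the log lines above.
    computed = 134_200_524     # cephadm's per-OSD target for compute-0
    minimum  = 939_524_096     # osd_memory_target hard minimum

    print(f"{computed / 2**20:.2f} MiB")   # 127.98 MiB -- logged as "127.9M"
    print(f"{minimum  / 2**20:.0f} MiB")   # 896 MiB
    print(computed >= minimum)             # False -> "Unable to set ..." warning

The corresponding adjustment on compute-1 (5247M, a few lines earlier) is far above the minimum and produced no such warning.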
Dec 09 12:03:08 compute-0 ceph-mgr[74679]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/3815659079; not ready for session (expect reconnect)
Dec 09 12:03:08 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 09 12:03:08 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 09 12:03:08 compute-0 ceph-mgr[74679]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 09 12:03:08 compute-0 ceph-mgr[74679]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/3119520585; not ready for session (expect reconnect)
Dec 09 12:03:08 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 09 12:03:08 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 09 12:03:08 compute-0 ceph-mgr[74679]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 09 12:03:08 compute-0 ceph-mon[74388]: purged_snaps scrub starts
Dec 09 12:03:08 compute-0 ceph-mon[74388]: purged_snaps scrub ok
Dec 09 12:03:08 compute-0 ceph-mon[74388]: purged_snaps scrub starts
Dec 09 12:03:08 compute-0 ceph-mon[74388]: purged_snaps scrub ok
Dec 09 12:03:08 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 09 12:03:08 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 09 12:03:08 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:08 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:08 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:08 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:08 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Dec 09 12:03:08 compute-0 ceph-mon[74388]: Adjusting osd_memory_target on compute-0 to 127.9M
Dec 09 12:03:08 compute-0 ceph-mon[74388]: Unable to set osd_memory_target on compute-0 to 134200524: error parsing value: Value '134200524' is below minimum 939524096
Dec 09 12:03:08 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 09 12:03:08 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 09 12:03:08 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 12:03:09 compute-0 ceph-mgr[74679]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/3815659079; not ready for session (expect reconnect)
Dec 09 12:03:09 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 09 12:03:09 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 09 12:03:09 compute-0 ceph-mgr[74679]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 09 12:03:09 compute-0 ceph-mgr[74679]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/3119520585; not ready for session (expect reconnect)
Dec 09 12:03:09 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 09 12:03:09 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 09 12:03:09 compute-0 ceph-mgr[74679]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 09 12:03:09 compute-0 ceph-mon[74388]: pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 12:03:09 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 09 12:03:09 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 09 12:03:09 compute-0 ceph-osd[82922]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 30.290 iops: 7754.362 elapsed_sec: 0.387
Dec 09 12:03:09 compute-0 ceph-osd[82922]: log_channel(cluster) log [WRN] : OSD bench result of 7754.361854 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
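The bench figures are internally consistent with the parameters logged at startup ("bench count 12288000 bsize 4 KiB"): 12288000 bytes in 4 KiB blocks is 3000 writes, and 3000 writes in roughly 0.387 s is about 7.75 kIOPS, far above the 50-500 IOPS plausibility window mclock applies to rotational devices, so the measurement is discarded and the 315 IOPS HDD default is kept. (osd.1 sits on a loop device, per the _collect_metadata line below, so an HDD-class profile was never realistic.) A sketch of the arithmetic:

    count, bsize = 12_288_000, 4096   # bytes written, block size (from the log)
    elapsed = 0.387                   # seconds (from the log)

    ops = count // bsize              # 3000 writes
    iops = ops / elapsed              # ~7752 (log: 7754.362; elapsed is rounded)
    mib_s = count / elapsed / 2**20   # ~30.28 MiB/s (log: 30.290)

    low, high = 50.0, 500.0           # mclock plausibility window for HDDs
    print(iops, low <= iops <= high)  # False -> capacity stays at 315 IOPS

Following the log's own recommendation, a trusted Fio-derived figure could then be applied with: ceph config set osd.1 osd_mclock_max_capacity_iops_hdd <iops>.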
Dec 09 12:03:09 compute-0 ceph-osd[82922]: osd.1 0 waiting for initial osdmap
Dec 09 12:03:09 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-osd-1[82918]: 2025-12-09T12:03:09.542+0000 7fc8f43bb640 -1 osd.1 0 waiting for initial osdmap
Dec 09 12:03:09 compute-0 ceph-osd[82922]: osd.1 7 crush map has features 288514050185494528, adjusting msgr requires for clients
Dec 09 12:03:09 compute-0 ceph-osd[82922]: osd.1 7 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Dec 09 12:03:09 compute-0 ceph-osd[82922]: osd.1 7 crush map has features 3314932999778484224, adjusting msgr requires for osds
Dec 09 12:03:09 compute-0 ceph-osd[82922]: osd.1 7 check_osdmap_features require_osd_release unknown -> squid
Dec 09 12:03:09 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-osd-1[82918]: 2025-12-09T12:03:09.565+0000 7fc8ef1d0640 -1 osd.1 7 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 09 12:03:09 compute-0 ceph-osd[82922]: osd.1 7 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 09 12:03:09 compute-0 ceph-osd[82922]: osd.1 7 set_numa_affinity not setting numa affinity
Dec 09 12:03:09 compute-0 ceph-osd[82922]: osd.1 7 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Dec 09 12:03:10 compute-0 ceph-mgr[74679]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/3815659079; not ready for session (expect reconnect)
Dec 09 12:03:10 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 09 12:03:10 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 09 12:03:10 compute-0 ceph-mgr[74679]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 09 12:03:10 compute-0 ceph-mgr[74679]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/3119520585; not ready for session (expect reconnect)
Dec 09 12:03:10 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 09 12:03:10 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 09 12:03:10 compute-0 ceph-mgr[74679]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 09 12:03:10 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Dec 09 12:03:10 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 09 12:03:10 compute-0 ceph-mon[74388]: OSD bench result of 7754.361854 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 09 12:03:10 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 09 12:03:10 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 09 12:03:10 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e8 e8: 2 total, 2 up, 2 in
Dec 09 12:03:10 compute-0 ceph-mon[74388]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6802/3815659079,v1:192.168.122.100:6803/3815659079] boot
Dec 09 12:03:10 compute-0 ceph-mon[74388]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.101:6800/3119520585,v1:192.168.122.101:6801/3119520585] boot
Dec 09 12:03:10 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : osdmap e8: 2 total, 2 up, 2 in
Dec 09 12:03:10 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 09 12:03:10 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 09 12:03:10 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 09 12:03:10 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 09 12:03:10 compute-0 ceph-osd[82922]: osd.1 8 state: booting -> active
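With osdmap e8 both OSDs are up and in, and osd.1 has moved from booting to active. A quick sketch for confirming that state from any node with an admin keyring, just wrapping the standard CLI:

    import subprocess

    # Confirm what the monitor just logged: 2 total, 2 up, 2 in.
    for cmd in (["ceph", "osd", "stat"], ["ceph", "osd", "tree"]):
        print(subprocess.run(cmd, capture_output=True, text=True,
                             check=True).stdout)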
Dec 09 12:03:10 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 12:03:11 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Dec 09 12:03:11 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 09 12:03:11 compute-0 ceph-mon[74388]: OSD bench result of 8089.012180 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 09 12:03:11 compute-0 ceph-mon[74388]: osd.1 [v2:192.168.122.100:6802/3815659079,v1:192.168.122.100:6803/3815659079] boot
Dec 09 12:03:11 compute-0 ceph-mon[74388]: osd.0 [v2:192.168.122.101:6800/3119520585,v1:192.168.122.101:6801/3119520585] boot
Dec 09 12:03:11 compute-0 ceph-mon[74388]: osdmap e8: 2 total, 2 up, 2 in
Dec 09 12:03:11 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 09 12:03:11 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 09 12:03:11 compute-0 ceph-mon[74388]: pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 09 12:03:11 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e9 e9: 2 total, 2 up, 2 in
Dec 09 12:03:11 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : osdmap e9: 2 total, 2 up, 2 in
Dec 09 12:03:11 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 12:03:11 compute-0 ceph-mgr[74679]: [devicehealth INFO root] creating mgr pool
Dec 09 12:03:11 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0)
Dec 09 12:03:11 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Dec 09 12:03:12 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Dec 09 12:03:12 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 09 12:03:12 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
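The .mgr pool creation above comes from the mgr devicehealth module, and the audit line shows the exact mon-command JSON on the wire. A sketch sending the same command through the librados Python binding (the conffile path is an assumption; the command body is copied verbatim from the audit log):

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")  # assumed path
    cluster.connect()

    cmd = {
        "prefix": "osd pool create", "format": "json", "pool": ".mgr",
        "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32,
        "yes_i_really_mean_it": True,
    }
    ret, outbuf, outs = cluster.mon_command(json.dumps(cmd), b"")
    print(ret, outs)                  # 0 on success, as in the 'finished' line
    cluster.shutdown()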
Dec 09 12:03:12 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e10 e10: 2 total, 2 up, 2 in
Dec 09 12:03:12 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e10 crush map has features 3314933000852226048, adjusting msgr requires
Dec 09 12:03:12 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Dec 09 12:03:12 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Dec 09 12:03:12 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Dec 09 12:03:12 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : osdmap e10: 2 total, 2 up, 2 in
Dec 09 12:03:12 compute-0 ceph-osd[82922]: osd.1 10 crush map has features 288514051259236352, adjusting msgr requires for clients
Dec 09 12:03:12 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0)
Dec 09 12:03:12 compute-0 ceph-osd[82922]: osd.1 10 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Dec 09 12:03:12 compute-0 ceph-osd[82922]: osd.1 10 crush map has features 3314933000852226048, adjusting msgr requires for osds
Dec 09 12:03:12 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Dec 09 12:03:12 compute-0 ceph-mon[74388]: osdmap e9: 2 total, 2 up, 2 in
Dec 09 12:03:12 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Dec 09 12:03:12 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v44: 1 pgs: 1 unknown; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Dec 09 12:03:13 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Dec 09 12:03:13 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Dec 09 12:03:13 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e11 e11: 2 total, 2 up, 2 in
Dec 09 12:03:13 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : osdmap e11: 2 total, 2 up, 2 in
Dec 09 12:03:13 compute-0 ceph-mon[74388]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 09 12:03:13 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Dec 09 12:03:13 compute-0 ceph-mon[74388]: osdmap e10: 2 total, 2 up, 2 in
Dec 09 12:03:13 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Dec 09 12:03:13 compute-0 ceph-mon[74388]: pgmap v44: 1 pgs: 1 unknown; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Dec 09 12:03:13 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Dec 09 12:03:13 compute-0 ceph-mon[74388]: osdmap e11: 2 total, 2 up, 2 in
Dec 09 12:03:13 compute-0 ceph-mgr[74679]: [devicehealth INFO root] creating main.db for devicehealth
Dec 09 12:03:13 compute-0 ceph-mgr[74679]: [devicehealth INFO root] Check health
Dec 09 12:03:13 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec 09 12:03:13 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 09 12:03:13 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Dec 09 12:03:13 compute-0 sudo[85144]:     ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda
Dec 09 12:03:13 compute-0 sudo[85144]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 09 12:03:13 compute-0 sudo[85144]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167)
Dec 09 12:03:13 compute-0 sudo[85144]: pam_unix(sudo:session): session closed for user root
Dec 09 12:03:13 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
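These entries show the devicehealth scrape path end to end: the mgr asks the OSD's admin socket for 'smart', and the daemon shells out to smartctl as root with JSON output. A sketch of the same probe run by hand, assuming smartmontools is installed (a virtio disk like /dev/vda will typically return little useful SMART data):

    import json, subprocess

    # Same invocation as logged above; --json=o embeds the original
    # smartctl text alongside the structured fields.
    proc = subprocess.run(
        ["sudo", "smartctl", "-x", "--json=o", "/dev/vda"],
        capture_output=True, text=True,
    )  # smartctl uses bitmask exit codes, so don't gate on returncode
    report = json.loads(proc.stdout)
    print(report.get("device"), report.get("smart_status"))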
Dec 09 12:03:14 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Dec 09 12:03:14 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e12 e12: 2 total, 2 up, 2 in
Dec 09 12:03:14 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : osdmap e12: 2 total, 2 up, 2 in
Dec 09 12:03:14 compute-0 ceph-mon[74388]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 09 12:03:14 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 09 12:03:14 compute-0 ceph-mon[74388]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Dec 09 12:03:14 compute-0 ceph-mon[74388]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Dec 09 12:03:14 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v47: 1 pgs: 1 unknown; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Dec 09 12:03:15 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.wfxreg(active, since 87s)
Dec 09 12:03:15 compute-0 ceph-mon[74388]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec 09 12:03:15 compute-0 ceph-mon[74388]: osdmap e12: 2 total, 2 up, 2 in
Dec 09 12:03:15 compute-0 ceph-mon[74388]: pgmap v47: 1 pgs: 1 unknown; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Dec 09 12:03:15 compute-0 ceph-mon[74388]: mgrmap e8: compute-0.wfxreg(active, since 87s)
Dec 09 12:03:16 compute-0 ceph-mon[74388]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec 09 12:03:16 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e12 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 12:03:16 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v48: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 09 12:03:17 compute-0 ceph-mon[74388]: pgmap v48: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 09 12:03:18 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v49: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 09 12:03:19 compute-0 ceph-mon[74388]: pgmap v49: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 09 12:03:19 compute-0 ceph-mgr[74679]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 12:03:19 compute-0 ceph-mgr[74679]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 12:03:19 compute-0 ceph-mgr[74679]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 12:03:19 compute-0 ceph-mgr[74679]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 12:03:19 compute-0 ceph-mgr[74679]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 12:03:19 compute-0 ceph-mgr[74679]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 12:03:20 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 09 12:03:21 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e12 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 12:03:21 compute-0 sshd-session[85147]: Received disconnect from 80.94.93.119 port 59364:11:  [preauth]
Dec 09 12:03:21 compute-0 sshd-session[85147]: Disconnected from authenticating user root 80.94.93.119 port 59364 [preauth]
Dec 09 12:03:21 compute-0 ceph-mon[74388]: pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 09 12:03:22 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 09 12:03:23 compute-0 ceph-mon[74388]: pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 09 12:03:24 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 09 12:03:25 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 09 12:03:25 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:25 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 09 12:03:25 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:25 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 09 12:03:25 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:25 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 09 12:03:25 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:25 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Dec 09 12:03:25 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 09 12:03:25 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 12:03:25 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:03:25 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 09 12:03:25 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 09 12:03:25 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Dec 09 12:03:25 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Dec 09 12:03:25 compute-0 ceph-mon[74388]: pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 09 12:03:25 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:25 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:25 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:25 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:25 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 09 12:03:25 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:03:25 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 09 12:03:26 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.conf
Dec 09 12:03:26 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.conf
Dec 09 12:03:26 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e12 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 12:03:26 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 09 12:03:27 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec 09 12:03:27 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec 09 12:03:27 compute-0 ceph-mon[74388]: Updating compute-2:/etc/ceph/ceph.conf
Dec 09 12:03:27 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.client.admin.keyring
Dec 09 12:03:27 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.client.admin.keyring
Dec 09 12:03:28 compute-0 ceph-mon[74388]: Updating compute-2:/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.conf
Dec 09 12:03:28 compute-0 ceph-mon[74388]: pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 09 12:03:28 compute-0 ceph-mon[74388]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
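The "Updating compute-2:/etc/ceph/..." lines are cephadm pushing a minimal conf and the admin keyring to the new host; the conf content comes from the "config generate-minimal-conf" mon command dispatched just before. A rough local sketch of that step (the /tmp path is a stand-in for the /etc/ceph/ceph.conf target cephadm actually writes on compute-2):

```python
# Sketch of the conf-distribution step logged above: fetch the minimal conf
# from the mons, then write it out locally. The output path is a stand-in.
import subprocess

minimal_conf = subprocess.run(
    ["ceph", "config", "generate-minimal-conf"],
    capture_output=True, text=True, check=True,
).stdout

with open("/tmp/ceph.conf.minimal", "w") as f:
    f.write(minimal_conf)
```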
Dec 09 12:03:28 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 09 12:03:28 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:28 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 09 12:03:28 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:28 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 09 12:03:28 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:28 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 09 12:03:28 compute-0 ceph-mgr[74679]: [progress INFO root] update: starting ev 577edab8-f5e5-4198-b555-c14aa64570ec (Updating mon deployment (+2 -> 3))
Dec 09 12:03:28 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec 09 12:03:28 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 09 12:03:28 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec 09 12:03:28 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 09 12:03:28 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 12:03:28 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:03:28 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-2 on compute-2
Dec 09 12:03:28 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-2 on compute-2
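cephadm is expanding the mon service here (the progress event "Updating mon deployment (+2 -> 3)") according to its stored spec, which is what the mgr/cephadm/spec.mon config-key writes above persist. One operator-side way to express the same placement, assuming the three hostnames seen in this log:

```python
# Sketch: declaring the 3-host mon placement cephadm is converging on above.
# Hostnames are taken from this log; requires the cephadm orchestrator backend.
import subprocess

subprocess.run(
    ["ceph", "orch", "apply", "mon",
     "--placement", "compute-0,compute-1,compute-2"],
    check=True,
)
```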
Dec 09 12:03:29 compute-0 ceph-mon[74388]: Updating compute-2:/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.client.admin.keyring
Dec 09 12:03:29 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:29 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:29 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:29 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 09 12:03:29 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 09 12:03:29 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:03:29 compute-0 ceph-mon[74388]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Dec 09 12:03:29 compute-0 ceph-mon[74388]: log_channel(cluster) log [INF] : Cluster is now healthy
Dec 09 12:03:30 compute-0 ceph-mon[74388]: pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 09 12:03:30 compute-0 ceph-mon[74388]: Deploying daemon mon.compute-2 on compute-2
Dec 09 12:03:30 compute-0 ceph-mon[74388]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Dec 09 12:03:30 compute-0 ceph-mon[74388]: Cluster is now healthy
Dec 09 12:03:30 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 09 12:03:31 compute-0 ceph-mon[74388]: pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 09 12:03:31 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e12 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 12:03:31 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 09 12:03:31 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:31 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 09 12:03:31 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:31 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 09 12:03:31 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:31 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec 09 12:03:31 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 09 12:03:31 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec 09 12:03:31 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 09 12:03:31 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 12:03:31 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:03:31 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-1 on compute-1
Dec 09 12:03:31 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-1 on compute-1
Dec 09 12:03:31 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Dec 09 12:03:31 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Dec 09 12:03:31 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Dec 09 12:03:31 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).monmap v1 adding/updating compute-2 at [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to monitor cluster
Dec 09 12:03:31 compute-0 ceph-mgr[74679]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/310000671; not ready for session (expect reconnect)
Dec 09 12:03:31 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec 09 12:03:31 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 09 12:03:31 compute-0 ceph-mgr[74679]: mgr finish mon failed to return metadata for mon.compute-2: (2) No such file or directory
Dec 09 12:03:31 compute-0 ceph-mon[74388]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec 09 12:03:31 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 09 12:03:31 compute-0 ceph-mon[74388]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec 09 12:03:31 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 09 12:03:31 compute-0 ceph-mgr[74679]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Dec 09 12:03:31 compute-0 ceph-mon[74388]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Dec 09 12:03:31 compute-0 ceph-mon[74388]: paxos.0).electionLogic(5) init, last seen epoch 5, mid-election, bumping
Dec 09 12:03:31 compute-0 ceph-mon[74388]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 09 12:03:32 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 09 12:03:32 compute-0 sudo[85172]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltcpujxusavyjkhgtbigegefitkmrfwl ; /usr/bin/python3'
Dec 09 12:03:32 compute-0 sudo[85172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:03:32 compute-0 ceph-mgr[74679]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/310000671; not ready for session (expect reconnect)
Dec 09 12:03:32 compute-0 ceph-mgr[74679]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Dec 09 12:03:32 compute-0 ceph-mon[74388]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec 09 12:03:32 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 09 12:03:32 compute-0 python3[85174]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 750b57e3-924f-51a5-ab09-01517535f732 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 12:03:32 compute-0 podman[85176]: 2025-12-09 12:03:32.701200029 +0000 UTC m=+0.048025276 container create 726b3fc19337860140e66ce84ba4cc994ada90727ba6f8c838df256a60f77b4d (image=quay.io/ceph/ceph:v19, name=brave_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 09 12:03:32 compute-0 systemd[1]: Started libpod-conmon-726b3fc19337860140e66ce84ba4cc994ada90727ba6f8c838df256a60f77b4d.scope.
Dec 09 12:03:32 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:03:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c060a8e97ffb442d64f0b1606770d56287aa3bc68a2659eef19faf040c2df808/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:03:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c060a8e97ffb442d64f0b1606770d56287aa3bc68a2659eef19faf040c2df808/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 09 12:03:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c060a8e97ffb442d64f0b1606770d56287aa3bc68a2659eef19faf040c2df808/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:03:32 compute-0 podman[85176]: 2025-12-09 12:03:32.676383585 +0000 UTC m=+0.023208882 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:03:32 compute-0 podman[85176]: 2025-12-09 12:03:32.78350467 +0000 UTC m=+0.130329937 container init 726b3fc19337860140e66ce84ba4cc994ada90727ba6f8c838df256a60f77b4d (image=quay.io/ceph/ceph:v19, name=brave_euler, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 09 12:03:32 compute-0 podman[85176]: 2025-12-09 12:03:32.790910453 +0000 UTC m=+0.137735700 container start 726b3fc19337860140e66ce84ba4cc994ada90727ba6f8c838df256a60f77b4d (image=quay.io/ceph/ceph:v19, name=brave_euler, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec 09 12:03:32 compute-0 podman[85176]: 2025-12-09 12:03:32.7941798 +0000 UTC m=+0.141005047 container attach 726b3fc19337860140e66ce84ba4cc994ada90727ba6f8c838df256a60f77b4d (image=quay.io/ceph/ceph:v19, name=brave_euler, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 09 12:03:32 compute-0 ceph-mon[74388]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Dec 09 12:03:33 compute-0 ceph-mon[74388]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Dec 09 12:03:33 compute-0 ceph-mon[74388]: mon.compute-0@0(electing) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 09 12:03:33 compute-0 ceph-mon[74388]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Dec 09 12:03:33 compute-0 ceph-mon[74388]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Dec 09 12:03:33 compute-0 ceph-mon[74388]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Dec 09 12:03:33 compute-0 ceph-mgr[74679]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/4031641521; not ready for session (expect reconnect)
Dec 09 12:03:33 compute-0 ceph-mon[74388]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 09 12:03:33 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 09 12:03:33 compute-0 ceph-mgr[74679]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Dec 09 12:03:33 compute-0 ceph-mon[74388]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Dec 09 12:03:33 compute-0 ceph-mgr[74679]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/310000671; not ready for session (expect reconnect)
Dec 09 12:03:33 compute-0 ceph-mon[74388]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec 09 12:03:33 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 09 12:03:33 compute-0 ceph-mgr[74679]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Dec 09 12:03:34 compute-0 ceph-mon[74388]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Dec 09 12:03:34 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 09 12:03:34 compute-0 ceph-mgr[74679]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/4031641521; not ready for session (expect reconnect)
Dec 09 12:03:34 compute-0 ceph-mon[74388]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 09 12:03:34 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 09 12:03:34 compute-0 ceph-mgr[74679]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Dec 09 12:03:34 compute-0 ceph-mgr[74679]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/310000671; not ready for session (expect reconnect)
Dec 09 12:03:34 compute-0 ceph-mon[74388]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec 09 12:03:34 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 09 12:03:34 compute-0 ceph-mgr[74679]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Dec 09 12:03:35 compute-0 ceph-mon[74388]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Dec 09 12:03:35 compute-0 ceph-mgr[74679]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/4031641521; not ready for session (expect reconnect)
Dec 09 12:03:35 compute-0 ceph-mon[74388]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 09 12:03:35 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 09 12:03:35 compute-0 ceph-mgr[74679]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Dec 09 12:03:35 compute-0 ceph-mgr[74679]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/310000671; not ready for session (expect reconnect)
Dec 09 12:03:35 compute-0 ceph-mon[74388]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec 09 12:03:35 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 09 12:03:35 compute-0 ceph-mgr[74679]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Dec 09 12:03:35 compute-0 ceph-mon[74388]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Dec 09 12:03:36 compute-0 ceph-mon[74388]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Dec 09 12:03:36 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 09 12:03:36 compute-0 ceph-mgr[74679]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/4031641521; not ready for session (expect reconnect)
Dec 09 12:03:36 compute-0 ceph-mon[74388]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 09 12:03:36 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 09 12:03:36 compute-0 ceph-mgr[74679]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Dec 09 12:03:36 compute-0 ceph-mon[74388]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Dec 09 12:03:36 compute-0 ceph-mgr[74679]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/310000671; not ready for session (expect reconnect)
Dec 09 12:03:36 compute-0 ceph-mon[74388]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec 09 12:03:36 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 09 12:03:36 compute-0 ceph-mgr[74679]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Dec 09 12:03:36 compute-0 ceph-mon[74388]: paxos.0).electionLogic(7) init, last seen epoch 7, mid-election, bumping
Dec 09 12:03:36 compute-0 ceph-mon[74388]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 09 12:03:36 compute-0 ceph-mon[74388]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Dec 09 12:03:36 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : monmap epoch 2
Dec 09 12:03:36 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : fsid 750b57e3-924f-51a5-ab09-01517535f732
Dec 09 12:03:36 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : last_changed 2025-12-09T12:03:31.588791+0000
Dec 09 12:03:36 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : created 2025-12-09T12:01:22.103720+0000
Dec 09 12:03:36 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Dec 09 12:03:36 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : election_strategy: 1
Dec 09 12:03:36 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec 09 12:03:36 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Dec 09 12:03:36 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 09 12:03:36 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : fsmap 
Dec 09 12:03:36 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : osdmap e12: 2 total, 2 up, 2 in
Dec 09 12:03:36 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.wfxreg(active, since 108s)
Dec 09 12:03:36 compute-0 ceph-mon[74388]: log_channel(cluster) log [INF] : overall HEALTH_OK
Dec 09 12:03:36 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:36 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 09 12:03:36 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:36 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 09 12:03:36 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:36 compute-0 ceph-mgr[74679]: [progress INFO root] complete: finished ev 577edab8-f5e5-4198-b555-c14aa64570ec (Updating mon deployment (+2 -> 3))
Dec 09 12:03:36 compute-0 ceph-mgr[74679]: [progress INFO root] Completed event 577edab8-f5e5-4198-b555-c14aa64570ec (Updating mon deployment (+2 -> 3)) in 8 seconds
Dec 09 12:03:36 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 09 12:03:36 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:36 compute-0 ceph-mgr[74679]: [progress INFO root] update: starting ev c7962261-0baf-4eb6-9863-0c56ebb0c229 (Updating mgr deployment (+2 -> 3))
Dec 09 12:03:36 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.hvlbot", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 09 12:03:36 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.hvlbot", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 09 12:03:36 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.hvlbot", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec 09 12:03:36 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 09 12:03:36 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 09 12:03:36 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 12:03:36 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:03:36 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-2.hvlbot on compute-2
Dec 09 12:03:36 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-2.hvlbot on compute-2
Dec 09 12:03:36 compute-0 ceph-mon[74388]: Deploying daemon mon.compute-1 on compute-1
Dec 09 12:03:36 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 09 12:03:36 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 09 12:03:36 compute-0 ceph-mon[74388]: mon.compute-0 calling monitor election
Dec 09 12:03:36 compute-0 ceph-mon[74388]: pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 09 12:03:36 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 09 12:03:36 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 09 12:03:36 compute-0 ceph-mon[74388]: mon.compute-2 calling monitor election
Dec 09 12:03:36 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 09 12:03:36 compute-0 ceph-mon[74388]: pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 09 12:03:36 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 09 12:03:36 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 09 12:03:36 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 09 12:03:36 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 09 12:03:36 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 09 12:03:36 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 09 12:03:36 compute-0 ceph-mon[74388]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Dec 09 12:03:36 compute-0 ceph-mon[74388]: monmap epoch 2
Dec 09 12:03:36 compute-0 ceph-mon[74388]: fsid 750b57e3-924f-51a5-ab09-01517535f732
Dec 09 12:03:36 compute-0 ceph-mon[74388]: last_changed 2025-12-09T12:03:31.588791+0000
Dec 09 12:03:36 compute-0 ceph-mon[74388]: created 2025-12-09T12:01:22.103720+0000
Dec 09 12:03:36 compute-0 ceph-mon[74388]: min_mon_release 19 (squid)
Dec 09 12:03:36 compute-0 ceph-mon[74388]: election_strategy: 1
Dec 09 12:03:36 compute-0 ceph-mon[74388]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec 09 12:03:36 compute-0 ceph-mon[74388]: 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Dec 09 12:03:36 compute-0 ceph-mon[74388]: fsmap 
Dec 09 12:03:36 compute-0 ceph-mon[74388]: osdmap e12: 2 total, 2 up, 2 in
Dec 09 12:03:36 compute-0 ceph-mon[74388]: mgrmap e8: compute-0.wfxreg(active, since 108s)
Dec 09 12:03:36 compute-0 ceph-mon[74388]: overall HEALTH_OK
Dec 09 12:03:36 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:36 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:36 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:36 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:36 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.hvlbot", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 09 12:03:36 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.hvlbot", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec 09 12:03:36 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 09 12:03:36 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:03:37 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Dec 09 12:03:37 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).monmap v2 adding/updating compute-1 at [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to monitor cluster
Dec 09 12:03:37 compute-0 ceph-mgr[74679]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/4031641521; not ready for session (expect reconnect)
Dec 09 12:03:37 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 09 12:03:37 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 09 12:03:37 compute-0 ceph-mgr[74679]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Dec 09 12:03:37 compute-0 ceph-mon[74388]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec 09 12:03:37 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 09 12:03:37 compute-0 ceph-mon[74388]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 09 12:03:37 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 09 12:03:37 compute-0 ceph-mon[74388]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec 09 12:03:37 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 09 12:03:37 compute-0 ceph-mon[74388]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Dec 09 12:03:37 compute-0 ceph-mon[74388]: paxos.0).electionLogic(10) init, last seen epoch 10
Dec 09 12:03:37 compute-0 ceph-mgr[74679]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Dec 09 12:03:37 compute-0 ceph-mon[74388]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 09 12:03:37 compute-0 ceph-mgr[74679]: mgr.server handle_report got status from non-daemon mon.compute-2
Dec 09 12:03:37 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:03:37.592+0000 7ff25f681640 -1 mgr.server handle_report got status from non-daemon mon.compute-2
Dec 09 12:03:37 compute-0 ceph-mon[74388]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec 09 12:03:37 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2966109174' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 09 12:03:37 compute-0 brave_euler[85193]: 
Dec 09 12:03:37 compute-0 brave_euler[85193]: {"fsid":"750b57e3-924f-51a5-ab09-01517535f732","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":11,"quorum":[],"quorum_names":[],"quorum_age":2722,"monmap":{"epoch":3,"min_mon_release_name":"squid","num_mons":3},"osdmap":{"epoch":12,"num_osds":2,"num_up_osds":2,"osd_up_since":1765281790,"num_in_osds":2,"osd_in_since":1765281772,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":1}],"num_pgs":1,"num_pools":1,"num_objects":2,"data_bytes":459280,"bytes_used":55771136,"bytes_avail":42885513216,"bytes_total":42941284352},"fsmap":{"epoch":1,"btime":"2025-12-09T12:01:24:354878+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-12-09T12:02:52.809145+0000","services":{}},"progress_events":{"577edab8-f5e5-4198-b555-c14aa64570ec":{"message":"Updating mon deployment (+2 -> 3) (3s)\n      [==============..............] (remaining: 3s)","progress":0.5,"add_to_ceph_s":true}}}
Dec 09 12:03:37 compute-0 systemd[1]: libpod-726b3fc19337860140e66ce84ba4cc994ada90727ba6f8c838df256a60f77b4d.scope: Deactivated successfully.
Dec 09 12:03:37 compute-0 podman[85176]: 2025-12-09 12:03:37.646579406 +0000 UTC m=+4.993404673 container died 726b3fc19337860140e66ce84ba4cc994ada90727ba6f8c838df256a60f77b4d (image=quay.io/ceph/ceph:v19, name=brave_euler, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid)
Dec 09 12:03:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-c060a8e97ffb442d64f0b1606770d56287aa3bc68a2659eef19faf040c2df808-merged.mount: Deactivated successfully.
Dec 09 12:03:37 compute-0 podman[85176]: 2025-12-09 12:03:37.681782782 +0000 UTC m=+5.028608029 container remove 726b3fc19337860140e66ce84ba4cc994ada90727ba6f8c838df256a60f77b4d (image=quay.io/ceph/ceph:v19, name=brave_euler, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 12:03:37 compute-0 systemd[1]: libpod-conmon-726b3fc19337860140e66ce84ba4cc994ada90727ba6f8c838df256a60f77b4d.scope: Deactivated successfully.
Dec 09 12:03:37 compute-0 sudo[85172]: pam_unix(sudo:session): session closed for user root
Dec 09 12:03:38 compute-0 sudo[85252]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnvjwgbevnqewtedwkidshlvjncpzctl ; /usr/bin/python3'
Dec 09 12:03:38 compute-0 sudo[85252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:03:38 compute-0 python3[85254]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 750b57e3-924f-51a5-ab09-01517535f732 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
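This ansible task creates the "vms" pool through the containerized CLI. The same call without the podman wrapper, with arguments copied from the _raw_params above:

```python
# Sketch of the pool-create step driven by the ansible task above,
# minus the podman wrapper; arguments copied from the log line.
import subprocess

subprocess.run(
    ["ceph", "osd", "pool", "create", "vms",
     "replicated_rule", "--autoscale-mode", "on"],
    check=True,
)
```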
Dec 09 12:03:38 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 09 12:03:38 compute-0 podman[85255]: 2025-12-09 12:03:38.323267711 +0000 UTC m=+0.048267537 container create 093bf3a784388b8e54c6f6d770d87a4b1e7a3f298bb4c21a5a59912e98c6d35e (image=quay.io/ceph/ceph:v19, name=gifted_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Dec 09 12:03:38 compute-0 systemd[1]: Started libpod-conmon-093bf3a784388b8e54c6f6d770d87a4b1e7a3f298bb4c21a5a59912e98c6d35e.scope.
Dec 09 12:03:38 compute-0 podman[85255]: 2025-12-09 12:03:38.302889091 +0000 UTC m=+0.027888947 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:03:38 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:03:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b0dfdc49d45a815879a9d59d55ecd4cc0e75b8f2c442b21104cad0bedb87935/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:03:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b0dfdc49d45a815879a9d59d55ecd4cc0e75b8f2c442b21104cad0bedb87935/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:03:38 compute-0 podman[85255]: 2025-12-09 12:03:38.434523765 +0000 UTC m=+0.159523611 container init 093bf3a784388b8e54c6f6d770d87a4b1e7a3f298bb4c21a5a59912e98c6d35e (image=quay.io/ceph/ceph:v19, name=gifted_satoshi, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 12:03:38 compute-0 podman[85255]: 2025-12-09 12:03:38.440576714 +0000 UTC m=+0.165576540 container start 093bf3a784388b8e54c6f6d770d87a4b1e7a3f298bb4c21a5a59912e98c6d35e (image=quay.io/ceph/ceph:v19, name=gifted_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec 09 12:03:38 compute-0 podman[85255]: 2025-12-09 12:03:38.443416617 +0000 UTC m=+0.168416463 container attach 093bf3a784388b8e54c6f6d770d87a4b1e7a3f298bb4c21a5a59912e98c6d35e (image=quay.io/ceph/ceph:v19, name=gifted_satoshi, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 12:03:38 compute-0 ceph-mgr[74679]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/4031641521; not ready for session (expect reconnect)
Dec 09 12:03:38 compute-0 ceph-mon[74388]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 09 12:03:38 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 09 12:03:38 compute-0 ceph-mgr[74679]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Dec 09 12:03:38 compute-0 ceph-mon[74388]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 09 12:03:38 compute-0 ceph-mon[74388]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 09 12:03:38 compute-0 ceph-mon[74388]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 09 12:03:38 compute-0 ceph-mon[74388]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 09 12:03:38 compute-0 ceph-mon[74388]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 09 12:03:39 compute-0 ceph-mon[74388]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 09 12:03:39 compute-0 ceph-mon[74388]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 09 12:03:39 compute-0 ceph-mgr[74679]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/4031641521; not ready for session (expect reconnect)
Dec 09 12:03:39 compute-0 ceph-mon[74388]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 09 12:03:39 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 09 12:03:39 compute-0 ceph-mgr[74679]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Dec 09 12:03:39 compute-0 ceph-mon[74388]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 09 12:03:40 compute-0 ceph-mon[74388]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 09 12:03:40 compute-0 ceph-mgr[74679]: [progress INFO root] Writing back 3 completed events
Dec 09 12:03:40 compute-0 ceph-mon[74388]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 09 12:03:40 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 09 12:03:40 compute-0 ceph-mgr[74679]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/4031641521; not ready for session (expect reconnect)
Dec 09 12:03:40 compute-0 ceph-mon[74388]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 09 12:03:40 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 09 12:03:40 compute-0 ceph-mgr[74679]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Dec 09 12:03:41 compute-0 ceph-mgr[74679]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/4031641521; not ready for session (expect reconnect)
Dec 09 12:03:41 compute-0 ceph-mon[74388]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 09 12:03:41 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 09 12:03:41 compute-0 ceph-mgr[74679]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Dec 09 12:03:41 compute-0 ceph-mon[74388]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 09 12:03:41 compute-0 ceph-mon[74388]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 09 12:03:41 compute-0 ceph-mon[74388]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 09 12:03:41 compute-0 ceph-mon[74388]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 09 12:03:42 compute-0 ceph-mon[74388]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 09 12:03:42 compute-0 ceph-mon[74388]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 09 12:03:42 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 09 12:03:42 compute-0 ceph-mgr[74679]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/4031641521; not ready for session (expect reconnect)
Dec 09 12:03:42 compute-0 ceph-mon[74388]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 09 12:03:42 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 09 12:03:42 compute-0 ceph-mgr[74679]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Dec 09 12:03:42 compute-0 ceph-mon[74388]: paxos.0).electionLogic(11) init, last seen epoch 11, mid-election, bumping
Dec 09 12:03:42 compute-0 ceph-mon[74388]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 09 12:03:42 compute-0 ceph-mon[74388]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Dec 09 12:03:42 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : monmap epoch 3
Dec 09 12:03:42 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : fsid 750b57e3-924f-51a5-ab09-01517535f732
Dec 09 12:03:42 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : last_changed 2025-12-09T12:03:37.477775+0000
Dec 09 12:03:42 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : created 2025-12-09T12:01:22.103720+0000
Dec 09 12:03:42 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Dec 09 12:03:42 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : election_strategy: 1
Dec 09 12:03:42 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec 09 12:03:42 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Dec 09 12:03:42 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : 2: [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon.compute-1
Dec 09 12:03:42 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 09 12:03:42 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : fsmap 
Dec 09 12:03:42 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : osdmap e12: 2 total, 2 up, 2 in
Dec 09 12:03:42 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.wfxreg(active, since 114s)
Dec 09 12:03:42 compute-0 ceph-mon[74388]: log_channel(cluster) log [INF] : overall HEALTH_OK
Dec 09 12:03:42 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:42 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 09 12:03:42 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:42 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:42 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 09 12:03:42 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:42 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-1.lorvly", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 09 12:03:42 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.lorvly", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 09 12:03:42 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.lorvly", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec 09 12:03:42 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 09 12:03:42 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 09 12:03:42 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 12:03:42 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:03:42 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-1.lorvly on compute-1
Dec 09 12:03:42 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-1.lorvly on compute-1
Dec 09 12:03:42 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 09 12:03:42 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 09 12:03:42 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 09 12:03:42 compute-0 ceph-mon[74388]: mon.compute-0 calling monitor election
Dec 09 12:03:42 compute-0 ceph-mon[74388]: mon.compute-2 calling monitor election
Dec 09 12:03:42 compute-0 ceph-mon[74388]: from='client.? 192.168.122.100:0/2966109174' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 09 12:03:42 compute-0 ceph-mon[74388]: pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 09 12:03:42 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 09 12:03:42 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 09 12:03:42 compute-0 ceph-mon[74388]: mon.compute-1 calling monitor election
Dec 09 12:03:42 compute-0 ceph-mon[74388]: pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 09 12:03:42 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 09 12:03:42 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 09 12:03:42 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 09 12:03:42 compute-0 ceph-mon[74388]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Dec 09 12:03:42 compute-0 ceph-mon[74388]: monmap epoch 3
Dec 09 12:03:42 compute-0 ceph-mon[74388]: fsid 750b57e3-924f-51a5-ab09-01517535f732
Dec 09 12:03:42 compute-0 ceph-mon[74388]: last_changed 2025-12-09T12:03:37.477775+0000
Dec 09 12:03:42 compute-0 ceph-mon[74388]: created 2025-12-09T12:01:22.103720+0000
Dec 09 12:03:42 compute-0 ceph-mon[74388]: min_mon_release 19 (squid)
Dec 09 12:03:42 compute-0 ceph-mon[74388]: election_strategy: 1
Dec 09 12:03:42 compute-0 ceph-mon[74388]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec 09 12:03:42 compute-0 ceph-mon[74388]: 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Dec 09 12:03:42 compute-0 ceph-mon[74388]: 2: [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon.compute-1
Dec 09 12:03:42 compute-0 ceph-mon[74388]: fsmap 
Dec 09 12:03:42 compute-0 ceph-mon[74388]: osdmap e12: 2 total, 2 up, 2 in
Dec 09 12:03:42 compute-0 ceph-mon[74388]: mgrmap e8: compute-0.wfxreg(active, since 114s)
Dec 09 12:03:42 compute-0 ceph-mon[74388]: overall HEALTH_OK
Dec 09 12:03:42 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:42 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:42 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:42 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:42 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.lorvly", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 09 12:03:43 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec 09 12:03:43 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/518835598' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 09 12:03:43 compute-0 ceph-mgr[74679]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/4031641521; not ready for session (expect reconnect)
Dec 09 12:03:43 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 09 12:03:43 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 09 12:03:43 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Dec 09 12:03:43 compute-0 ceph-mon[74388]: pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 09 12:03:43 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.lorvly", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec 09 12:03:43 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 09 12:03:43 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:03:43 compute-0 ceph-mon[74388]: Deploying daemon mgr.compute-1.lorvly on compute-1
Dec 09 12:03:43 compute-0 ceph-mon[74388]: from='client.? 192.168.122.100:0/518835598' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 09 12:03:43 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 09 12:03:43 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/518835598' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 09 12:03:43 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e13 e13: 2 total, 2 up, 2 in
Dec 09 12:03:43 compute-0 gifted_satoshi[85270]: pool 'vms' created
Dec 09 12:03:43 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : osdmap e13: 2 total, 2 up, 2 in
Dec 09 12:03:43 compute-0 systemd[1]: libpod-093bf3a784388b8e54c6f6d770d87a4b1e7a3f298bb4c21a5a59912e98c6d35e.scope: Deactivated successfully.
Dec 09 12:03:43 compute-0 podman[85255]: 2025-12-09 12:03:43.718950857 +0000 UTC m=+5.443950693 container died 093bf3a784388b8e54c6f6d770d87a4b1e7a3f298bb4c21a5a59912e98c6d35e (image=quay.io/ceph/ceph:v19, name=gifted_satoshi, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 09 12:03:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-2b0dfdc49d45a815879a9d59d55ecd4cc0e75b8f2c442b21104cad0bedb87935-merged.mount: Deactivated successfully.
Dec 09 12:03:43 compute-0 podman[85255]: 2025-12-09 12:03:43.768908317 +0000 UTC m=+5.493908143 container remove 093bf3a784388b8e54c6f6d770d87a4b1e7a3f298bb4c21a5a59912e98c6d35e (image=quay.io/ceph/ceph:v19, name=gifted_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 12:03:43 compute-0 systemd[1]: libpod-conmon-093bf3a784388b8e54c6f6d770d87a4b1e7a3f298bb4c21a5a59912e98c6d35e.scope: Deactivated successfully.
Dec 09 12:03:43 compute-0 sudo[85252]: pam_unix(sudo:session): session closed for user root
Dec 09 12:03:43 compute-0 sudo[85332]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvjqdhjwxofzywnhilzmwisrjiapvmmu ; /usr/bin/python3'
Dec 09 12:03:43 compute-0 sudo[85332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:03:44 compute-0 python3[85334]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 750b57e3-924f-51a5-ab09-01517535f732 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 12:03:44 compute-0 podman[85335]: 2025-12-09 12:03:44.162172413 +0000 UTC m=+0.046376684 container create 353c5a1624ab70b1afb01cd6636c86a8525cb7887db0099202c9fca0f78afb5c (image=quay.io/ceph/ceph:v19, name=elastic_bohr, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec 09 12:03:44 compute-0 systemd[1]: Started libpod-conmon-353c5a1624ab70b1afb01cd6636c86a8525cb7887db0099202c9fca0f78afb5c.scope.
Dec 09 12:03:44 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:03:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97c59b4b6e51c675c5b857d8b60a2c14adad08b423e04510561f513241713b6b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:03:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97c59b4b6e51c675c5b857d8b60a2c14adad08b423e04510561f513241713b6b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:03:44 compute-0 podman[85335]: 2025-12-09 12:03:44.220970464 +0000 UTC m=+0.105174745 container init 353c5a1624ab70b1afb01cd6636c86a8525cb7887db0099202c9fca0f78afb5c (image=quay.io/ceph/ceph:v19, name=elastic_bohr, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 09 12:03:44 compute-0 podman[85335]: 2025-12-09 12:03:44.226666371 +0000 UTC m=+0.110870672 container start 353c5a1624ab70b1afb01cd6636c86a8525cb7887db0099202c9fca0f78afb5c (image=quay.io/ceph/ceph:v19, name=elastic_bohr, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 09 12:03:44 compute-0 podman[85335]: 2025-12-09 12:03:44.230919711 +0000 UTC m=+0.115123982 container attach 353c5a1624ab70b1afb01cd6636c86a8525cb7887db0099202c9fca0f78afb5c (image=quay.io/ceph/ceph:v19, name=elastic_bohr, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 09 12:03:44 compute-0 podman[85335]: 2025-12-09 12:03:44.139922612 +0000 UTC m=+0.024126913 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:03:44 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 09 12:03:44 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:44 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 09 12:03:44 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:44 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 09 12:03:44 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:44 compute-0 ceph-mgr[74679]: [progress INFO root] complete: finished ev c7962261-0baf-4eb6-9863-0c56ebb0c229 (Updating mgr deployment (+2 -> 3))
Dec 09 12:03:44 compute-0 ceph-mgr[74679]: [progress INFO root] Completed event c7962261-0baf-4eb6-9863-0c56ebb0c229 (Updating mgr deployment (+2 -> 3)) in 8 seconds
Dec 09 12:03:44 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 09 12:03:44 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:44 compute-0 ceph-mgr[74679]: [progress INFO root] update: starting ev c2e3581d-aef8-40ab-9437-2bf893f57c40 (Updating crash deployment (+1 -> 3))
Dec 09 12:03:44 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 09 12:03:44 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec 09 12:03:44 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec 09 12:03:44 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 12:03:44 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:03:44 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-2 on compute-2
Dec 09 12:03:44 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-2 on compute-2
Dec 09 12:03:44 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v63: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 09 12:03:44 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec 09 12:03:44 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4117379622' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 09 12:03:44 compute-0 ceph-mon[74388]: from='client.? 192.168.122.100:0/518835598' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 09 12:03:44 compute-0 ceph-mon[74388]: osdmap e13: 2 total, 2 up, 2 in
Dec 09 12:03:44 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:44 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:44 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:44 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:44 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec 09 12:03:44 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec 09 12:03:44 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:03:44 compute-0 ceph-mon[74388]: from='client.? 192.168.122.100:0/4117379622' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 09 12:03:45 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Dec 09 12:03:45 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4117379622' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 09 12:03:45 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e14 e14: 2 total, 2 up, 2 in
Dec 09 12:03:45 compute-0 elastic_bohr[85350]: pool 'volumes' created
Dec 09 12:03:45 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : osdmap e14: 2 total, 2 up, 2 in
Dec 09 12:03:45 compute-0 systemd[1]: libpod-353c5a1624ab70b1afb01cd6636c86a8525cb7887db0099202c9fca0f78afb5c.scope: Deactivated successfully.
Dec 09 12:03:45 compute-0 podman[85335]: 2025-12-09 12:03:45.291168674 +0000 UTC m=+1.175373025 container died 353c5a1624ab70b1afb01cd6636c86a8525cb7887db0099202c9fca0f78afb5c (image=quay.io/ceph/ceph:v19, name=elastic_bohr, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec 09 12:03:45 compute-0 ceph-mon[74388]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 09 12:03:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-97c59b4b6e51c675c5b857d8b60a2c14adad08b423e04510561f513241713b6b-merged.mount: Deactivated successfully.
Dec 09 12:03:45 compute-0 podman[85335]: 2025-12-09 12:03:45.334824398 +0000 UTC m=+1.219028689 container remove 353c5a1624ab70b1afb01cd6636c86a8525cb7887db0099202c9fca0f78afb5c (image=quay.io/ceph/ceph:v19, name=elastic_bohr, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec 09 12:03:45 compute-0 systemd[1]: libpod-conmon-353c5a1624ab70b1afb01cd6636c86a8525cb7887db0099202c9fca0f78afb5c.scope: Deactivated successfully.
Dec 09 12:03:45 compute-0 sudo[85332]: pam_unix(sudo:session): session closed for user root
Dec 09 12:03:45 compute-0 sudo[85411]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lobfnyykegvhgayvlvlnzzlecyegkvqo ; /usr/bin/python3'
Dec 09 12:03:45 compute-0 sudo[85411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:03:45 compute-0 python3[85413]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 750b57e3-924f-51a5-ab09-01517535f732 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 12:03:45 compute-0 podman[85414]: 2025-12-09 12:03:45.702350399 +0000 UTC m=+0.039838959 container create 06cd63662117db8060fe1e647e205e75c92f5869788cefdb33185698aae811f9 (image=quay.io/ceph/ceph:v19, name=magical_bouman, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 09 12:03:45 compute-0 systemd[1]: Started libpod-conmon-06cd63662117db8060fe1e647e205e75c92f5869788cefdb33185698aae811f9.scope.
Dec 09 12:03:45 compute-0 ceph-mon[74388]: Deploying daemon crash.compute-2 on compute-2
Dec 09 12:03:45 compute-0 ceph-mon[74388]: pgmap v63: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 09 12:03:45 compute-0 ceph-mon[74388]: from='client.? 192.168.122.100:0/4117379622' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 09 12:03:45 compute-0 ceph-mon[74388]: osdmap e14: 2 total, 2 up, 2 in
Dec 09 12:03:45 compute-0 ceph-mon[74388]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 09 12:03:45 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:03:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e5f5f789f47cfd5d2099d28718840bbf6b8c6763f9b8cf0befd43afad2895db/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:03:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e5f5f789f47cfd5d2099d28718840bbf6b8c6763f9b8cf0befd43afad2895db/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:03:45 compute-0 podman[85414]: 2025-12-09 12:03:45.685160954 +0000 UTC m=+0.022649544 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:03:45 compute-0 podman[85414]: 2025-12-09 12:03:45.785582203 +0000 UTC m=+0.123070803 container init 06cd63662117db8060fe1e647e205e75c92f5869788cefdb33185698aae811f9 (image=quay.io/ceph/ceph:v19, name=magical_bouman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Dec 09 12:03:45 compute-0 podman[85414]: 2025-12-09 12:03:45.792752318 +0000 UTC m=+0.130240888 container start 06cd63662117db8060fe1e647e205e75c92f5869788cefdb33185698aae811f9 (image=quay.io/ceph/ceph:v19, name=magical_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec 09 12:03:45 compute-0 podman[85414]: 2025-12-09 12:03:45.79614102 +0000 UTC m=+0.133629610 container attach 06cd63662117db8060fe1e647e205e75c92f5869788cefdb33185698aae811f9 (image=quay.io/ceph/ceph:v19, name=magical_bouman, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 09 12:03:45 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 14 pg[3.0( empty local-lis/les=0/0 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:46 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 09 12:03:46 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:46 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 09 12:03:46 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:46 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec 09 12:03:46 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:46 compute-0 ceph-mgr[74679]: [progress INFO root] complete: finished ev c2e3581d-aef8-40ab-9437-2bf893f57c40 (Updating crash deployment (+1 -> 3))
Dec 09 12:03:46 compute-0 ceph-mgr[74679]: [progress INFO root] Completed event c2e3581d-aef8-40ab-9437-2bf893f57c40 (Updating crash deployment (+1 -> 3)) in 2 seconds
Dec 09 12:03:46 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec 09 12:03:46 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:46 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 09 12:03:46 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 09 12:03:46 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 09 12:03:46 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 09 12:03:46 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 12:03:46 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:03:46 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 09 12:03:46 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 09 12:03:46 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 12:03:46 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:03:46 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec 09 12:03:46 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/262904119' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 09 12:03:46 compute-0 sudo[85452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 12:03:46 compute-0 sudo[85452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:03:46 compute-0 sudo[85452]: pam_unix(sudo:session): session closed for user root
Dec 09 12:03:46 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Dec 09 12:03:46 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/262904119' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 09 12:03:46 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e15 e15: 2 total, 2 up, 2 in
Dec 09 12:03:46 compute-0 magical_bouman[85429]: pool 'backups' created
Dec 09 12:03:46 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : osdmap e15: 2 total, 2 up, 2 in
Dec 09 12:03:46 compute-0 sudo[85480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 750b57e3-924f-51a5-ab09-01517535f732 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 09 12:03:46 compute-0 sudo[85480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:03:46 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 15 pg[4.0( empty local-lis/les=0/0 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=15) [1] r=0 lpr=15 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:46 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 15 pg[3.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:46 compute-0 systemd[1]: libpod-06cd63662117db8060fe1e647e205e75c92f5869788cefdb33185698aae811f9.scope: Deactivated successfully.
Dec 09 12:03:46 compute-0 podman[85414]: 2025-12-09 12:03:46.31160619 +0000 UTC m=+0.649094760 container died 06cd63662117db8060fe1e647e205e75c92f5869788cefdb33185698aae811f9 (image=quay.io/ceph/ceph:v19, name=magical_bouman, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 09 12:03:46 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v66: 4 pgs: 1 unknown, 1 creating+peering, 2 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 09 12:03:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e5f5f789f47cfd5d2099d28718840bbf6b8c6763f9b8cf0befd43afad2895db-merged.mount: Deactivated successfully.
Dec 09 12:03:46 compute-0 podman[85414]: 2025-12-09 12:03:46.347219769 +0000 UTC m=+0.684708339 container remove 06cd63662117db8060fe1e647e205e75c92f5869788cefdb33185698aae811f9 (image=quay.io/ceph/ceph:v19, name=magical_bouman, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 09 12:03:46 compute-0 systemd[1]: libpod-conmon-06cd63662117db8060fe1e647e205e75c92f5869788cefdb33185698aae811f9.scope: Deactivated successfully.
Dec 09 12:03:46 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e15 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 12:03:46 compute-0 sudo[85411]: pam_unix(sudo:session): session closed for user root
Dec 09 12:03:46 compute-0 sudo[85547]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvupzgeniqwpcwmqoghrntbuqjsacpgu ; /usr/bin/python3'
Dec 09 12:03:46 compute-0 sudo[85547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:03:46 compute-0 python3[85554]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 750b57e3-924f-51a5-ab09-01517535f732 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 12:03:46 compute-0 podman[85583]: 2025-12-09 12:03:46.716619882 +0000 UTC m=+0.047834423 container create a80778f182d103af14beff7744adf0e558a184cce833093dcefb273235f8ea24 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_pasteur, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 09 12:03:46 compute-0 podman[85594]: 2025-12-09 12:03:46.742935686 +0000 UTC m=+0.042602230 container create c0b95cf3a46d25238ab1874875bdd4446b8d62a0f5f189fd99dd174baaa2938a (image=quay.io/ceph/ceph:v19, name=zen_jackson, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 09 12:03:46 compute-0 systemd[1]: Started libpod-conmon-a80778f182d103af14beff7744adf0e558a184cce833093dcefb273235f8ea24.scope.
Dec 09 12:03:46 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:03:46 compute-0 systemd[1]: Started libpod-conmon-c0b95cf3a46d25238ab1874875bdd4446b8d62a0f5f189fd99dd174baaa2938a.scope.
Dec 09 12:03:46 compute-0 podman[85583]: 2025-12-09 12:03:46.792333868 +0000 UTC m=+0.123548439 container init a80778f182d103af14beff7744adf0e558a184cce833093dcefb273235f8ea24 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True)
Dec 09 12:03:46 compute-0 podman[85583]: 2025-12-09 12:03:46.698332031 +0000 UTC m=+0.029546572 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 09 12:03:46 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:03:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93d92368aece4c2697429de3eb48095b10fa47c52ecede8592477072e7c4dd25/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:03:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93d92368aece4c2697429de3eb48095b10fa47c52ecede8592477072e7c4dd25/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:03:46 compute-0 podman[85583]: 2025-12-09 12:03:46.800385323 +0000 UTC m=+0.131599844 container start a80778f182d103af14beff7744adf0e558a184cce833093dcefb273235f8ea24 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_pasteur, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 09 12:03:46 compute-0 systemd[1]: libpod-a80778f182d103af14beff7744adf0e558a184cce833093dcefb273235f8ea24.scope: Deactivated successfully.
Dec 09 12:03:46 compute-0 podman[85583]: 2025-12-09 12:03:46.804800487 +0000 UTC m=+0.136015048 container attach a80778f182d103af14beff7744adf0e558a184cce833093dcefb273235f8ea24 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 12:03:46 compute-0 priceless_pasteur[85613]: 167 167
Dec 09 12:03:46 compute-0 conmon[85613]: conmon a80778f182d103af14be <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a80778f182d103af14beff7744adf0e558a184cce833093dcefb273235f8ea24.scope/container/memory.events
Dec 09 12:03:46 compute-0 podman[85583]: 2025-12-09 12:03:46.808961964 +0000 UTC m=+0.140176485 container died a80778f182d103af14beff7744adf0e558a184cce833093dcefb273235f8ea24 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_pasteur, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 12:03:46 compute-0 podman[85594]: 2025-12-09 12:03:46.809095019 +0000 UTC m=+0.108761593 container init c0b95cf3a46d25238ab1874875bdd4446b8d62a0f5f189fd99dd174baaa2938a (image=quay.io/ceph/ceph:v19, name=zen_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 12:03:46 compute-0 podman[85594]: 2025-12-09 12:03:46.816783771 +0000 UTC m=+0.116450345 container start c0b95cf3a46d25238ab1874875bdd4446b8d62a0f5f189fd99dd174baaa2938a (image=quay.io/ceph/ceph:v19, name=zen_jackson, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 12:03:46 compute-0 podman[85594]: 2025-12-09 12:03:46.722886968 +0000 UTC m=+0.022553542 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:03:46 compute-0 podman[85594]: 2025-12-09 12:03:46.820020158 +0000 UTC m=+0.119686722 container attach c0b95cf3a46d25238ab1874875bdd4446b8d62a0f5f189fd99dd174baaa2938a (image=quay.io/ceph/ceph:v19, name=zen_jackson, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec 09 12:03:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9feb53656f7999fe50fefe93fb4eb943e58afba186e342f8eac4e8c4c2eb290-merged.mount: Deactivated successfully.
Dec 09 12:03:46 compute-0 podman[85583]: 2025-12-09 12:03:46.843150918 +0000 UTC m=+0.174365429 container remove a80778f182d103af14beff7744adf0e558a184cce833093dcefb273235f8ea24 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325)
Dec 09 12:03:46 compute-0 systemd[1]: libpod-conmon-a80778f182d103af14beff7744adf0e558a184cce833093dcefb273235f8ea24.scope: Deactivated successfully.
Dec 09 12:03:46 compute-0 podman[85654]: 2025-12-09 12:03:46.984720957 +0000 UTC m=+0.036543802 container create b080d36ca1f0e0b14af48d1ac92710b251666b0d2365fcb2e629ff950af8435a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec 09 12:03:47 compute-0 systemd[1]: Started libpod-conmon-b080d36ca1f0e0b14af48d1ac92710b251666b0d2365fcb2e629ff950af8435a.scope.
Dec 09 12:03:47 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:03:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a320d7e132195679a7eef37898e50193db2d9aa9c391f02377b5fe1d86627894/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 12:03:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a320d7e132195679a7eef37898e50193db2d9aa9c391f02377b5fe1d86627894/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:03:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a320d7e132195679a7eef37898e50193db2d9aa9c391f02377b5fe1d86627894/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:03:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a320d7e132195679a7eef37898e50193db2d9aa9c391f02377b5fe1d86627894/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 12:03:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a320d7e132195679a7eef37898e50193db2d9aa9c391f02377b5fe1d86627894/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 12:03:47 compute-0 podman[85654]: 2025-12-09 12:03:46.968272737 +0000 UTC m=+0.020095612 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 09 12:03:47 compute-0 podman[85654]: 2025-12-09 12:03:47.08287097 +0000 UTC m=+0.134693815 container init b080d36ca1f0e0b14af48d1ac92710b251666b0d2365fcb2e629ff950af8435a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_kalam, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 09 12:03:47 compute-0 podman[85654]: 2025-12-09 12:03:47.089332863 +0000 UTC m=+0.141155708 container start b080d36ca1f0e0b14af48d1ac92710b251666b0d2365fcb2e629ff950af8435a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_kalam, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 12:03:47 compute-0 podman[85654]: 2025-12-09 12:03:47.092417514 +0000 UTC m=+0.144240379 container attach b080d36ca1f0e0b14af48d1ac92710b251666b0d2365fcb2e629ff950af8435a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 09 12:03:47 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:47 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:47 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:47 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:47 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 09 12:03:47 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 09 12:03:47 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:03:47 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 09 12:03:47 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:03:47 compute-0 ceph-mon[74388]: from='client.? 192.168.122.100:0/262904119' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 09 12:03:47 compute-0 ceph-mon[74388]: from='client.? 192.168.122.100:0/262904119' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 09 12:03:47 compute-0 ceph-mon[74388]: osdmap e15: 2 total, 2 up, 2 in
Dec 09 12:03:47 compute-0 ceph-mon[74388]: pgmap v66: 4 pgs: 1 unknown, 1 creating+peering, 2 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 09 12:03:47 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec 09 12:03:47 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1585520832' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 09 12:03:47 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Dec 09 12:03:47 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1585520832' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 09 12:03:47 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e16 e16: 2 total, 2 up, 2 in
Dec 09 12:03:47 compute-0 zen_jackson[85618]: pool 'images' created
Dec 09 12:03:47 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : osdmap e16: 2 total, 2 up, 2 in
Dec 09 12:03:47 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 16 pg[5.0( empty local-lis/les=0/0 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=16) [1] r=0 lpr=16 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:47 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 16 pg[4.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=15) [1] r=0 lpr=15 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:47 compute-0 systemd[1]: libpod-c0b95cf3a46d25238ab1874875bdd4446b8d62a0f5f189fd99dd174baaa2938a.scope: Deactivated successfully.
Dec 09 12:03:47 compute-0 podman[85594]: 2025-12-09 12:03:47.324249549 +0000 UTC m=+0.623916123 container died c0b95cf3a46d25238ab1874875bdd4446b8d62a0f5f189fd99dd174baaa2938a (image=quay.io/ceph/ceph:v19, name=zen_jackson, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Dec 09 12:03:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-93d92368aece4c2697429de3eb48095b10fa47c52ecede8592477072e7c4dd25-merged.mount: Deactivated successfully.
Dec 09 12:03:47 compute-0 podman[85594]: 2025-12-09 12:03:47.365771192 +0000 UTC m=+0.665437736 container remove c0b95cf3a46d25238ab1874875bdd4446b8d62a0f5f189fd99dd174baaa2938a (image=quay.io/ceph/ceph:v19, name=zen_jackson, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 12:03:47 compute-0 systemd[1]: libpod-conmon-c0b95cf3a46d25238ab1874875bdd4446b8d62a0f5f189fd99dd174baaa2938a.scope: Deactivated successfully.
Dec 09 12:03:47 compute-0 sudo[85547]: pam_unix(sudo:session): session closed for user root
Dec 09 12:03:47 compute-0 mystifying_kalam[85679]: --> passed data devices: 0 physical, 1 LVM
Dec 09 12:03:47 compute-0 mystifying_kalam[85679]: --> All data devices are unavailable
Dec 09 12:03:47 compute-0 systemd[1]: libpod-b080d36ca1f0e0b14af48d1ac92710b251666b0d2365fcb2e629ff950af8435a.scope: Deactivated successfully.
Dec 09 12:03:47 compute-0 podman[85654]: 2025-12-09 12:03:47.425774564 +0000 UTC m=+0.477597409 container died b080d36ca1f0e0b14af48d1ac92710b251666b0d2365fcb2e629ff950af8435a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 12:03:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-a320d7e132195679a7eef37898e50193db2d9aa9c391f02377b5fe1d86627894-merged.mount: Deactivated successfully.
Dec 09 12:03:47 compute-0 podman[85654]: 2025-12-09 12:03:47.461239558 +0000 UTC m=+0.513062403 container remove b080d36ca1f0e0b14af48d1ac92710b251666b0d2365fcb2e629ff950af8435a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_kalam, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 12:03:47 compute-0 systemd[1]: libpod-conmon-b080d36ca1f0e0b14af48d1ac92710b251666b0d2365fcb2e629ff950af8435a.scope: Deactivated successfully.
Dec 09 12:03:47 compute-0 sudo[85480]: pam_unix(sudo:session): session closed for user root
Dec 09 12:03:47 compute-0 sudo[85748]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxmowayepbjohqdcbgxboxzzrszpvjyx ; /usr/bin/python3'
Dec 09 12:03:47 compute-0 sudo[85748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:03:47 compute-0 sudo[85739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 12:03:47 compute-0 sudo[85739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:03:47 compute-0 sudo[85739]: pam_unix(sudo:session): session closed for user root
Dec 09 12:03:47 compute-0 sudo[85770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 750b57e3-924f-51a5-ab09-01517535f732 -- lvm list --format json
Dec 09 12:03:47 compute-0 sudo[85770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:03:47 compute-0 ceph-mgr[74679]: [progress INFO root] Writing back 5 completed events
Dec 09 12:03:47 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 09 12:03:47 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:47 compute-0 python3[85767]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 750b57e3-924f-51a5-ab09-01517535f732 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 12:03:47 compute-0 podman[85795]: 2025-12-09 12:03:47.722495199 +0000 UTC m=+0.043906054 container create 4610be8290e3cdf42056c36c9386bdfaf71fcad68bdf730a88d76d6cf2425a9a (image=quay.io/ceph/ceph:v19, name=quizzical_poitras, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Dec 09 12:03:47 compute-0 systemd[1]: Started libpod-conmon-4610be8290e3cdf42056c36c9386bdfaf71fcad68bdf730a88d76d6cf2425a9a.scope.
Dec 09 12:03:47 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:03:47 compute-0 podman[85795]: 2025-12-09 12:03:47.701587682 +0000 UTC m=+0.022998547 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:03:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa2e8d648c26090a0b3fc5bccd33d70a445db5b5011671205276ec67e438cbef/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:03:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa2e8d648c26090a0b3fc5bccd33d70a445db5b5011671205276ec67e438cbef/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:03:47 compute-0 podman[85795]: 2025-12-09 12:03:47.807272782 +0000 UTC m=+0.128683617 container init 4610be8290e3cdf42056c36c9386bdfaf71fcad68bdf730a88d76d6cf2425a9a (image=quay.io/ceph/ceph:v19, name=quizzical_poitras, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Dec 09 12:03:47 compute-0 podman[85795]: 2025-12-09 12:03:47.815466821 +0000 UTC m=+0.136877646 container start 4610be8290e3cdf42056c36c9386bdfaf71fcad68bdf730a88d76d6cf2425a9a (image=quay.io/ceph/ceph:v19, name=quizzical_poitras, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 09 12:03:47 compute-0 podman[85795]: 2025-12-09 12:03:47.819098121 +0000 UTC m=+0.140508936 container attach 4610be8290e3cdf42056c36c9386bdfaf71fcad68bdf730a88d76d6cf2425a9a (image=quay.io/ceph/ceph:v19, name=quizzical_poitras, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Dec 09 12:03:48 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd new", "uuid": "b8d22383-70ff-495f-84df-81e3540b751b"} v 0)
Dec 09 12:03:48 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b8d22383-70ff-495f-84df-81e3540b751b"}]: dispatch
Dec 09 12:03:48 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Dec 09 12:03:48 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "b8d22383-70ff-495f-84df-81e3540b751b"}]': finished
Dec 09 12:03:48 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e17 e17: 3 total, 2 up, 3 in
Dec 09 12:03:48 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 2 up, 3 in
Dec 09 12:03:48 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 09 12:03:48 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 09 12:03:48 compute-0 ceph-mgr[74679]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 09 12:03:48 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 17 pg[5.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=16) [1] r=0 lpr=16 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:48 compute-0 podman[85872]: 2025-12-09 12:03:48.065806024 +0000 UTC m=+0.039958714 container create bb065783c26ffbf1c4e795c082c624c089d21387ce3460b717d289c82455f60e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_lederberg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 09 12:03:48 compute-0 systemd[1]: Started libpod-conmon-bb065783c26ffbf1c4e795c082c624c089d21387ce3460b717d289c82455f60e.scope.
Dec 09 12:03:48 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:03:48 compute-0 podman[85872]: 2025-12-09 12:03:48.13054424 +0000 UTC m=+0.104696960 container init bb065783c26ffbf1c4e795c082c624c089d21387ce3460b717d289c82455f60e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec 09 12:03:48 compute-0 ceph-mon[74388]: from='client.? 192.168.122.100:0/1585520832' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 09 12:03:48 compute-0 ceph-mon[74388]: from='client.? 192.168.122.100:0/1585520832' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 09 12:03:48 compute-0 ceph-mon[74388]: osdmap e16: 2 total, 2 up, 2 in
Dec 09 12:03:48 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:48 compute-0 ceph-mon[74388]: from='client.? 192.168.122.102:0/4095674672' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b8d22383-70ff-495f-84df-81e3540b751b"}]: dispatch
Dec 09 12:03:48 compute-0 ceph-mon[74388]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b8d22383-70ff-495f-84df-81e3540b751b"}]: dispatch
Dec 09 12:03:48 compute-0 ceph-mon[74388]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "b8d22383-70ff-495f-84df-81e3540b751b"}]': finished
Dec 09 12:03:48 compute-0 ceph-mon[74388]: osdmap e17: 3 total, 2 up, 3 in
Dec 09 12:03:48 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 09 12:03:48 compute-0 podman[85872]: 2025-12-09 12:03:48.136570798 +0000 UTC m=+0.110723498 container start bb065783c26ffbf1c4e795c082c624c089d21387ce3460b717d289c82455f60e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 09 12:03:48 compute-0 podman[85872]: 2025-12-09 12:03:48.139361489 +0000 UTC m=+0.113514189 container attach bb065783c26ffbf1c4e795c082c624c089d21387ce3460b717d289c82455f60e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_lederberg, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 12:03:48 compute-0 happy_lederberg[85889]: 167 167
Dec 09 12:03:48 compute-0 systemd[1]: libpod-bb065783c26ffbf1c4e795c082c624c089d21387ce3460b717d289c82455f60e.scope: Deactivated successfully.
Dec 09 12:03:48 compute-0 podman[85872]: 2025-12-09 12:03:48.140986422 +0000 UTC m=+0.115139122 container died bb065783c26ffbf1c4e795c082c624c089d21387ce3460b717d289c82455f60e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_lederberg, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec 09 12:03:48 compute-0 podman[85872]: 2025-12-09 12:03:48.050491371 +0000 UTC m=+0.024644091 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 09 12:03:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a4bac49a4d6d1d2d957be0edfb793b933340225b7714680c84befc9ecb15765-merged.mount: Deactivated successfully.
Dec 09 12:03:48 compute-0 podman[85872]: 2025-12-09 12:03:48.16862048 +0000 UTC m=+0.142773180 container remove bb065783c26ffbf1c4e795c082c624c089d21387ce3460b717d289c82455f60e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_lederberg, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 12:03:48 compute-0 systemd[1]: libpod-conmon-bb065783c26ffbf1c4e795c082c624c089d21387ce3460b717d289c82455f60e.scope: Deactivated successfully.
Dec 09 12:03:48 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec 09 12:03:48 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2954890397' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 09 12:03:48 compute-0 podman[85915]: 2025-12-09 12:03:48.305458364 +0000 UTC m=+0.036618303 container create 147673a599ceefe2462689489cd098fe1c12b5531356b07dc062bb76320ae1b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_keldysh, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 12:03:48 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v69: 5 pgs: 2 unknown, 1 creating+peering, 2 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 09 12:03:48 compute-0 systemd[1]: Started libpod-conmon-147673a599ceefe2462689489cd098fe1c12b5531356b07dc062bb76320ae1b9.scope.
Dec 09 12:03:48 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:03:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db5a5a463c000bfda24f95ff6f17026d5d2f082d52ab872db7323d4b842d6904/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 12:03:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db5a5a463c000bfda24f95ff6f17026d5d2f082d52ab872db7323d4b842d6904/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:03:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db5a5a463c000bfda24f95ff6f17026d5d2f082d52ab872db7323d4b842d6904/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:03:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db5a5a463c000bfda24f95ff6f17026d5d2f082d52ab872db7323d4b842d6904/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 12:03:48 compute-0 podman[85915]: 2025-12-09 12:03:48.289855302 +0000 UTC m=+0.021015271 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 09 12:03:48 compute-0 podman[85915]: 2025-12-09 12:03:48.392988059 +0000 UTC m=+0.124148088 container init 147673a599ceefe2462689489cd098fe1c12b5531356b07dc062bb76320ae1b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec 09 12:03:48 compute-0 podman[85915]: 2025-12-09 12:03:48.399509634 +0000 UTC m=+0.130669603 container start 147673a599ceefe2462689489cd098fe1c12b5531356b07dc062bb76320ae1b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_keldysh, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 09 12:03:48 compute-0 podman[85915]: 2025-12-09 12:03:48.403250447 +0000 UTC m=+0.134410406 container attach 147673a599ceefe2462689489cd098fe1c12b5531356b07dc062bb76320ae1b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_keldysh, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 09 12:03:48 compute-0 focused_keldysh[85932]: {
Dec 09 12:03:48 compute-0 focused_keldysh[85932]:     "1": [
Dec 09 12:03:48 compute-0 focused_keldysh[85932]:         {
Dec 09 12:03:48 compute-0 focused_keldysh[85932]:             "devices": [
Dec 09 12:03:48 compute-0 focused_keldysh[85932]:                 "/dev/loop3"
Dec 09 12:03:48 compute-0 focused_keldysh[85932]:             ],
Dec 09 12:03:48 compute-0 focused_keldysh[85932]:             "lv_name": "ceph_lv0",
Dec 09 12:03:48 compute-0 focused_keldysh[85932]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 12:03:48 compute-0 focused_keldysh[85932]:             "lv_size": "21470642176",
Dec 09 12:03:48 compute-0 focused_keldysh[85932]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=NmXN7G-RzdJ-ddgq-wQWO-4Bzg-8Ecu-xD2Ou5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=750b57e3-924f-51a5-ab09-01517535f732,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=0cb4756c-1cb3-414f-a66b-4ca287023452,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 12:03:48 compute-0 focused_keldysh[85932]:             "lv_uuid": "NmXN7G-RzdJ-ddgq-wQWO-4Bzg-8Ecu-xD2Ou5",
Dec 09 12:03:48 compute-0 focused_keldysh[85932]:             "name": "ceph_lv0",
Dec 09 12:03:48 compute-0 focused_keldysh[85932]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 12:03:48 compute-0 focused_keldysh[85932]:             "tags": {
Dec 09 12:03:48 compute-0 focused_keldysh[85932]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 09 12:03:48 compute-0 focused_keldysh[85932]:                 "ceph.block_uuid": "NmXN7G-RzdJ-ddgq-wQWO-4Bzg-8Ecu-xD2Ou5",
Dec 09 12:03:48 compute-0 focused_keldysh[85932]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 12:03:48 compute-0 focused_keldysh[85932]:                 "ceph.cluster_fsid": "750b57e3-924f-51a5-ab09-01517535f732",
Dec 09 12:03:48 compute-0 focused_keldysh[85932]:                 "ceph.cluster_name": "ceph",
Dec 09 12:03:48 compute-0 focused_keldysh[85932]:                 "ceph.crush_device_class": "",
Dec 09 12:03:48 compute-0 focused_keldysh[85932]:                 "ceph.encrypted": "0",
Dec 09 12:03:48 compute-0 focused_keldysh[85932]:                 "ceph.osd_fsid": "0cb4756c-1cb3-414f-a66b-4ca287023452",
Dec 09 12:03:48 compute-0 focused_keldysh[85932]:                 "ceph.osd_id": "1",
Dec 09 12:03:48 compute-0 focused_keldysh[85932]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 12:03:48 compute-0 focused_keldysh[85932]:                 "ceph.type": "block",
Dec 09 12:03:48 compute-0 focused_keldysh[85932]:                 "ceph.vdo": "0",
Dec 09 12:03:48 compute-0 focused_keldysh[85932]:                 "ceph.with_tpm": "0"
Dec 09 12:03:48 compute-0 focused_keldysh[85932]:             },
Dec 09 12:03:48 compute-0 focused_keldysh[85932]:             "type": "block",
Dec 09 12:03:48 compute-0 focused_keldysh[85932]:             "vg_name": "ceph_vg0"
Dec 09 12:03:48 compute-0 focused_keldysh[85932]:         }
Dec 09 12:03:48 compute-0 focused_keldysh[85932]:     ]
Dec 09 12:03:48 compute-0 focused_keldysh[85932]: }
Dec 09 12:03:48 compute-0 systemd[1]: libpod-147673a599ceefe2462689489cd098fe1c12b5531356b07dc062bb76320ae1b9.scope: Deactivated successfully.
Dec 09 12:03:48 compute-0 podman[85915]: 2025-12-09 12:03:48.670724712 +0000 UTC m=+0.401884661 container died 147673a599ceefe2462689489cd098fe1c12b5531356b07dc062bb76320ae1b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_keldysh, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec 09 12:03:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-db5a5a463c000bfda24f95ff6f17026d5d2f082d52ab872db7323d4b842d6904-merged.mount: Deactivated successfully.
Dec 09 12:03:48 compute-0 podman[85915]: 2025-12-09 12:03:48.712062469 +0000 UTC m=+0.443222418 container remove 147673a599ceefe2462689489cd098fe1c12b5531356b07dc062bb76320ae1b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec 09 12:03:48 compute-0 systemd[1]: libpod-conmon-147673a599ceefe2462689489cd098fe1c12b5531356b07dc062bb76320ae1b9.scope: Deactivated successfully.
Dec 09 12:03:48 compute-0 sudo[85770]: pam_unix(sudo:session): session closed for user root
Dec 09 12:03:48 compute-0 sudo[85953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 12:03:48 compute-0 sudo[85953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:03:48 compute-0 sudo[85953]: pam_unix(sudo:session): session closed for user root
Dec 09 12:03:48 compute-0 sudo[85978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 750b57e3-924f-51a5-ab09-01517535f732 -- raw list --format json
Dec 09 12:03:48 compute-0 sudo[85978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:03:49 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Dec 09 12:03:49 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2954890397' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 09 12:03:49 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e18 e18: 3 total, 2 up, 3 in
Dec 09 12:03:49 compute-0 quizzical_poitras[85810]: pool 'cephfs.cephfs.meta' created
Dec 09 12:03:49 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 2 up, 3 in
Dec 09 12:03:49 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 09 12:03:49 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 09 12:03:49 compute-0 ceph-mgr[74679]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 09 12:03:49 compute-0 systemd[1]: libpod-4610be8290e3cdf42056c36c9386bdfaf71fcad68bdf730a88d76d6cf2425a9a.scope: Deactivated successfully.
Dec 09 12:03:49 compute-0 podman[85795]: 2025-12-09 12:03:49.041281892 +0000 UTC m=+1.362692717 container died 4610be8290e3cdf42056c36c9386bdfaf71fcad68bdf730a88d76d6cf2425a9a (image=quay.io/ceph/ceph:v19, name=quizzical_poitras, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 12:03:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa2e8d648c26090a0b3fc5bccd33d70a445db5b5011671205276ec67e438cbef-merged.mount: Deactivated successfully.
Dec 09 12:03:49 compute-0 podman[85795]: 2025-12-09 12:03:49.078573526 +0000 UTC m=+1.399984341 container remove 4610be8290e3cdf42056c36c9386bdfaf71fcad68bdf730a88d76d6cf2425a9a (image=quay.io/ceph/ceph:v19, name=quizzical_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 12:03:49 compute-0 systemd[1]: libpod-conmon-4610be8290e3cdf42056c36c9386bdfaf71fcad68bdf730a88d76d6cf2425a9a.scope: Deactivated successfully.
Dec 09 12:03:49 compute-0 sudo[85748]: pam_unix(sudo:session): session closed for user root
Dec 09 12:03:49 compute-0 ceph-mon[74388]: from='client.? 192.168.122.100:0/2954890397' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 09 12:03:49 compute-0 ceph-mon[74388]: pgmap v69: 5 pgs: 2 unknown, 1 creating+peering, 2 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 09 12:03:49 compute-0 ceph-mon[74388]: from='client.? 192.168.122.102:0/4285368694' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Dec 09 12:03:49 compute-0 ceph-mon[74388]: from='client.? 192.168.122.100:0/2954890397' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 09 12:03:49 compute-0 ceph-mon[74388]: osdmap e18: 3 total, 2 up, 3 in
Dec 09 12:03:49 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 09 12:03:49 compute-0 sudo[86086]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlzwgditggqddhckszfhuewyxlhuuhru ; /usr/bin/python3'
Dec 09 12:03:49 compute-0 sudo[86086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:03:49 compute-0 podman[86064]: 2025-12-09 12:03:49.268411762 +0000 UTC m=+0.040052597 container create 8ada132a10652dda2d3ab21e6c9d2f697bec27fed0e3614bd2102fe9f4aa173e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_chebyshev, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec 09 12:03:49 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.hvlbot started
Dec 09 12:03:49 compute-0 ceph-mgr[74679]: mgr.server handle_open ignoring open from mgr.compute-2.hvlbot 192.168.122.102:0/1608003954; not ready for session (expect reconnect)
Dec 09 12:03:49 compute-0 systemd[1]: Started libpod-conmon-8ada132a10652dda2d3ab21e6c9d2f697bec27fed0e3614bd2102fe9f4aa173e.scope.
Dec 09 12:03:49 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:03:49 compute-0 podman[86064]: 2025-12-09 12:03:49.346172446 +0000 UTC m=+0.117813551 container init 8ada132a10652dda2d3ab21e6c9d2f697bec27fed0e3614bd2102fe9f4aa173e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_chebyshev, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Dec 09 12:03:49 compute-0 podman[86064]: 2025-12-09 12:03:49.252470358 +0000 UTC m=+0.024111213 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 09 12:03:49 compute-0 podman[86064]: 2025-12-09 12:03:49.353579509 +0000 UTC m=+0.125220344 container start 8ada132a10652dda2d3ab21e6c9d2f697bec27fed0e3614bd2102fe9f4aa173e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_chebyshev, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Dec 09 12:03:49 compute-0 podman[86064]: 2025-12-09 12:03:49.356982141 +0000 UTC m=+0.128622996 container attach 8ada132a10652dda2d3ab21e6c9d2f697bec27fed0e3614bd2102fe9f4aa173e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_chebyshev, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 12:03:49 compute-0 zen_chebyshev[86098]: 167 167
Dec 09 12:03:49 compute-0 systemd[1]: libpod-8ada132a10652dda2d3ab21e6c9d2f697bec27fed0e3614bd2102fe9f4aa173e.scope: Deactivated successfully.
Dec 09 12:03:49 compute-0 conmon[86098]: conmon 8ada132a10652dda2d3a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8ada132a10652dda2d3ab21e6c9d2f697bec27fed0e3614bd2102fe9f4aa173e.scope/container/memory.events
Dec 09 12:03:49 compute-0 podman[86064]: 2025-12-09 12:03:49.359275866 +0000 UTC m=+0.130916701 container died 8ada132a10652dda2d3ab21e6c9d2f697bec27fed0e3614bd2102fe9f4aa173e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_chebyshev, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Dec 09 12:03:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-37eaa80cc6a1bf878a480130573a5bd4f310386d5535e1159edf67a46da05b90-merged.mount: Deactivated successfully.
Dec 09 12:03:49 compute-0 python3[86092]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 750b57e3-924f-51a5-ab09-01517535f732 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 12:03:49 compute-0 podman[86064]: 2025-12-09 12:03:49.392013881 +0000 UTC m=+0.163654716 container remove 8ada132a10652dda2d3ab21e6c9d2f697bec27fed0e3614bd2102fe9f4aa173e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 12:03:49 compute-0 systemd[1]: libpod-conmon-8ada132a10652dda2d3ab21e6c9d2f697bec27fed0e3614bd2102fe9f4aa173e.scope: Deactivated successfully.
Dec 09 12:03:49 compute-0 podman[86115]: 2025-12-09 12:03:49.4431188 +0000 UTC m=+0.036542221 container create f7c5337e528090891be73ef063da0de7d9274ac74327e1e75cb97d95bedf00a4 (image=quay.io/ceph/ceph:v19, name=elastic_borg, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec 09 12:03:49 compute-0 systemd[1]: Started libpod-conmon-f7c5337e528090891be73ef063da0de7d9274ac74327e1e75cb97d95bedf00a4.scope.
Dec 09 12:03:49 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:03:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6532fcbba30a72530a588b332be7c2bd8f711b43b87277881b82e105dd3d55a9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:03:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6532fcbba30a72530a588b332be7c2bd8f711b43b87277881b82e105dd3d55a9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:03:49 compute-0 podman[86115]: 2025-12-09 12:03:49.515800797 +0000 UTC m=+0.109224208 container init f7c5337e528090891be73ef063da0de7d9274ac74327e1e75cb97d95bedf00a4 (image=quay.io/ceph/ceph:v19, name=elastic_borg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 09 12:03:49 compute-0 podman[86115]: 2025-12-09 12:03:49.520971497 +0000 UTC m=+0.114394918 container start f7c5337e528090891be73ef063da0de7d9274ac74327e1e75cb97d95bedf00a4 (image=quay.io/ceph/ceph:v19, name=elastic_borg, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec 09 12:03:49 compute-0 podman[86115]: 2025-12-09 12:03:49.427626691 +0000 UTC m=+0.021050132 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:03:49 compute-0 podman[86115]: 2025-12-09 12:03:49.526630033 +0000 UTC m=+0.120053474 container attach f7c5337e528090891be73ef063da0de7d9274ac74327e1e75cb97d95bedf00a4 (image=quay.io/ceph/ceph:v19, name=elastic_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec 09 12:03:49 compute-0 podman[86140]: 2025-12-09 12:03:49.536988063 +0000 UTC m=+0.033829892 container create c22f2a638a323e612577a78df5b7cde64b86b01bccab7848cf1a8a343d29a1d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 12:03:49 compute-0 systemd[1]: Started libpod-conmon-c22f2a638a323e612577a78df5b7cde64b86b01bccab7848cf1a8a343d29a1d5.scope.
Dec 09 12:03:49 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:03:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c234c9e7bd5ce358c817140c9853a95c28faa744d4c0afebcb02dac772dde77b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 12:03:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c234c9e7bd5ce358c817140c9853a95c28faa744d4c0afebcb02dac772dde77b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:03:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c234c9e7bd5ce358c817140c9853a95c28faa744d4c0afebcb02dac772dde77b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:03:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c234c9e7bd5ce358c817140c9853a95c28faa744d4c0afebcb02dac772dde77b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 12:03:49 compute-0 podman[86140]: 2025-12-09 12:03:49.607287842 +0000 UTC m=+0.104129681 container init c22f2a638a323e612577a78df5b7cde64b86b01bccab7848cf1a8a343d29a1d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_napier, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0)
Dec 09 12:03:49 compute-0 podman[86140]: 2025-12-09 12:03:49.616710431 +0000 UTC m=+0.113552270 container start c22f2a638a323e612577a78df5b7cde64b86b01bccab7848cf1a8a343d29a1d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 12:03:49 compute-0 podman[86140]: 2025-12-09 12:03:49.523157809 +0000 UTC m=+0.019999658 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 09 12:03:49 compute-0 podman[86140]: 2025-12-09 12:03:49.620228517 +0000 UTC m=+0.117070356 container attach c22f2a638a323e612577a78df5b7cde64b86b01bccab7848cf1a8a343d29a1d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_napier, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec 09 12:03:49 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 18 pg[6.0( empty local-lis/les=0/0 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [1] r=0 lpr=18 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:49 compute-0 ceph-mgr[74679]: [balancer INFO root] Optimize plan auto_2025-12-09_12:03:49
Dec 09 12:03:49 compute-0 ceph-mgr[74679]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 09 12:03:49 compute-0 ceph-mgr[74679]: [balancer INFO root] Some PGs (0.500000) are unknown; try again later
Dec 09 12:03:49 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec 09 12:03:49 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1107428714' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 09 12:03:49 compute-0 ceph-mgr[74679]: [pg_autoscaler INFO root] _maybe_adjust
Dec 09 12:03:49 compute-0 ceph-mgr[74679]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Dec 09 12:03:49 compute-0 ceph-mgr[74679]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 09 12:03:49 compute-0 ceph-mgr[74679]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Dec 09 12:03:49 compute-0 ceph-mgr[74679]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 09 12:03:49 compute-0 ceph-mgr[74679]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Dec 09 12:03:49 compute-0 ceph-mgr[74679]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 09 12:03:49 compute-0 ceph-mgr[74679]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Dec 09 12:03:49 compute-0 ceph-mgr[74679]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 09 12:03:49 compute-0 ceph-mgr[74679]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Dec 09 12:03:49 compute-0 ceph-mgr[74679]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 09 12:03:49 compute-0 ceph-mgr[74679]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Dec 09 12:03:49 compute-0 ceph-mgr[74679]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 09 12:03:49 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0)
Dec 09 12:03:49 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Dec 09 12:03:49 compute-0 ceph-mgr[74679]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 09 12:03:49 compute-0 ceph-mgr[74679]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 12:03:49 compute-0 ceph-mgr[74679]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 12:03:49 compute-0 ceph-mgr[74679]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 09 12:03:49 compute-0 ceph-mgr[74679]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 12:03:49 compute-0 ceph-mgr[74679]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 12:03:49 compute-0 ceph-mgr[74679]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 12:03:49 compute-0 ceph-mgr[74679]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 12:03:50 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Dec 09 12:03:50 compute-0 ceph-mon[74388]: Standby manager daemon compute-2.hvlbot started
Dec 09 12:03:50 compute-0 ceph-mon[74388]: from='client.? 192.168.122.100:0/1107428714' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 09 12:03:50 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Dec 09 12:03:50 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1107428714' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 09 12:03:50 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Dec 09 12:03:50 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e19 e19: 3 total, 2 up, 3 in
Dec 09 12:03:50 compute-0 elastic_borg[86133]: pool 'cephfs.cephfs.data' created
Dec 09 12:03:50 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 19 pg[6.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [1] r=0 lpr=18 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:50 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.wfxreg(active, since 2m), standbys: compute-2.hvlbot
Dec 09 12:03:50 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 2 up, 3 in
Dec 09 12:03:50 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 09 12:03:50 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 09 12:03:50 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.hvlbot", "id": "compute-2.hvlbot"} v 0)
Dec 09 12:03:50 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mgr metadata", "who": "compute-2.hvlbot", "id": "compute-2.hvlbot"}]: dispatch
Dec 09 12:03:50 compute-0 ceph-mgr[74679]: [progress INFO root] update: starting ev f50996d6-795f-4780-a71c-7a84ad0a53d7 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Dec 09 12:03:50 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0)
Dec 09 12:03:50 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Dec 09 12:03:50 compute-0 ceph-mgr[74679]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 09 12:03:50 compute-0 systemd[1]: libpod-f7c5337e528090891be73ef063da0de7d9274ac74327e1e75cb97d95bedf00a4.scope: Deactivated successfully.
Dec 09 12:03:50 compute-0 podman[86115]: 2025-12-09 12:03:50.210418981 +0000 UTC m=+0.803842402 container died f7c5337e528090891be73ef063da0de7d9274ac74327e1e75cb97d95bedf00a4 (image=quay.io/ceph/ceph:v19, name=elastic_borg, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 12:03:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-6532fcbba30a72530a588b332be7c2bd8f711b43b87277881b82e105dd3d55a9-merged.mount: Deactivated successfully.
Dec 09 12:03:50 compute-0 lvm[86261]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 09 12:03:50 compute-0 lvm[86261]: VG ceph_vg0 finished
Dec 09 12:03:50 compute-0 podman[86115]: 2025-12-09 12:03:50.25179896 +0000 UTC m=+0.845222381 container remove f7c5337e528090891be73ef063da0de7d9274ac74327e1e75cb97d95bedf00a4 (image=quay.io/ceph/ceph:v19, name=elastic_borg, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 09 12:03:50 compute-0 systemd[1]: libpod-conmon-f7c5337e528090891be73ef063da0de7d9274ac74327e1e75cb97d95bedf00a4.scope: Deactivated successfully.
Dec 09 12:03:50 compute-0 sudo[86086]: pam_unix(sudo:session): session closed for user root
Dec 09 12:03:50 compute-0 mystifying_napier[86157]: {}
Dec 09 12:03:50 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v72: 7 pgs: 4 unknown, 1 creating+peering, 2 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 09 12:03:50 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0)
Dec 09 12:03:50 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 09 12:03:50 compute-0 systemd[1]: libpod-c22f2a638a323e612577a78df5b7cde64b86b01bccab7848cf1a8a343d29a1d5.scope: Deactivated successfully.
Dec 09 12:03:50 compute-0 systemd[1]: libpod-c22f2a638a323e612577a78df5b7cde64b86b01bccab7848cf1a8a343d29a1d5.scope: Consumed 1.060s CPU time.
Dec 09 12:03:50 compute-0 podman[86271]: 2025-12-09 12:03:50.365150404 +0000 UTC m=+0.019856194 container died c22f2a638a323e612577a78df5b7cde64b86b01bccab7848cf1a8a343d29a1d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 09 12:03:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-c234c9e7bd5ce358c817140c9853a95c28faa744d4c0afebcb02dac772dde77b-merged.mount: Deactivated successfully.
Dec 09 12:03:50 compute-0 podman[86271]: 2025-12-09 12:03:50.396529155 +0000 UTC m=+0.051234935 container remove c22f2a638a323e612577a78df5b7cde64b86b01bccab7848cf1a8a343d29a1d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_napier, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec 09 12:03:50 compute-0 systemd[1]: libpod-conmon-c22f2a638a323e612577a78df5b7cde64b86b01bccab7848cf1a8a343d29a1d5.scope: Deactivated successfully.
Dec 09 12:03:50 compute-0 sudo[85978]: pam_unix(sudo:session): session closed for user root
Dec 09 12:03:50 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 12:03:50 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:50 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 12:03:50 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:50 compute-0 sudo[86309]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwpbzgpwbnyztpocbupoxhepzseydiql ; /usr/bin/python3'
Dec 09 12:03:50 compute-0 sudo[86309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:03:50 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.lorvly started
Dec 09 12:03:50 compute-0 ceph-mgr[74679]: mgr.server handle_open ignoring open from mgr.compute-1.lorvly 192.168.122.101:0/392921707; not ready for session (expect reconnect)
Dec 09 12:03:50 compute-0 python3[86311]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 750b57e3-924f-51a5-ab09-01517535f732 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 12:03:50 compute-0 podman[86312]: 2025-12-09 12:03:50.785113647 +0000 UTC m=+0.040302175 container create 720982f63ba210b52a00c42e33bde548c0306bc285287e5de18c9ae21a577f22 (image=quay.io/ceph/ceph:v19, name=stoic_jones, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Dec 09 12:03:50 compute-0 systemd[1]: Started libpod-conmon-720982f63ba210b52a00c42e33bde548c0306bc285287e5de18c9ae21a577f22.scope.
Dec 09 12:03:50 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:03:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04ad631eba1195f82a3c77b749ca9ac23e1df18a18a8cb5ccf9aeecb3a8898a4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:03:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04ad631eba1195f82a3c77b749ca9ac23e1df18a18a8cb5ccf9aeecb3a8898a4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:03:50 compute-0 podman[86312]: 2025-12-09 12:03:50.767766997 +0000 UTC m=+0.022955545 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:03:50 compute-0 podman[86312]: 2025-12-09 12:03:50.866850312 +0000 UTC m=+0.122038860 container init 720982f63ba210b52a00c42e33bde548c0306bc285287e5de18c9ae21a577f22 (image=quay.io/ceph/ceph:v19, name=stoic_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec 09 12:03:50 compute-0 podman[86312]: 2025-12-09 12:03:50.875168545 +0000 UTC m=+0.130357063 container start 720982f63ba210b52a00c42e33bde548c0306bc285287e5de18c9ae21a577f22 (image=quay.io/ceph/ceph:v19, name=stoic_jones, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 12:03:50 compute-0 podman[86312]: 2025-12-09 12:03:50.878487923 +0000 UTC m=+0.133676481 container attach 720982f63ba210b52a00c42e33bde548c0306bc285287e5de18c9ae21a577f22 (image=quay.io/ceph/ceph:v19, name=stoic_jones, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 09 12:03:51 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Dec 09 12:03:51 compute-0 ceph-mon[74388]: log_channel(cluster) log [WRN] : Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 09 12:03:51 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0)
Dec 09 12:03:51 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4152405834' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Dec 09 12:03:51 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Dec 09 12:03:51 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Dec 09 12:03:51 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e20 e20: 3 total, 2 up, 3 in
Dec 09 12:03:51 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 2 up, 3 in
Dec 09 12:03:51 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 09 12:03:51 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 09 12:03:51 compute-0 ceph-mgr[74679]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 09 12:03:51 compute-0 ceph-mgr[74679]: [progress INFO root] update: starting ev bca908ba-9a3f-45cf-ab74-0cd53e7688ab (PG autoscaler increasing pool 3 PGs from 1 to 32)
Dec 09 12:03:51 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0)
Dec 09 12:03:51 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Dec 09 12:03:51 compute-0 ceph-mon[74388]: from='client.? 192.168.122.100:0/1107428714' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 09 12:03:51 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Dec 09 12:03:51 compute-0 ceph-mon[74388]: mgrmap e9: compute-0.wfxreg(active, since 2m), standbys: compute-2.hvlbot
Dec 09 12:03:51 compute-0 ceph-mon[74388]: osdmap e19: 3 total, 2 up, 3 in
Dec 09 12:03:51 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 09 12:03:51 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mgr metadata", "who": "compute-2.hvlbot", "id": "compute-2.hvlbot"}]: dispatch
Dec 09 12:03:51 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Dec 09 12:03:51 compute-0 ceph-mon[74388]: pgmap v72: 7 pgs: 4 unknown, 1 creating+peering, 2 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 09 12:03:51 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 09 12:03:51 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:51 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:51 compute-0 ceph-mon[74388]: Standby manager daemon compute-1.lorvly started
Dec 09 12:03:51 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e20 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 12:03:51 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.wfxreg(active, since 2m), standbys: compute-2.hvlbot, compute-1.lorvly
Dec 09 12:03:51 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.lorvly", "id": "compute-1.lorvly"} v 0)
Dec 09 12:03:51 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mgr metadata", "who": "compute-1.lorvly", "id": "compute-1.lorvly"}]: dispatch
Dec 09 12:03:52 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Dec 09 12:03:52 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v74: 38 pgs: 1 creating+peering, 31 unknown, 6 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 09 12:03:52 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0)
Dec 09 12:03:52 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 09 12:03:52 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4152405834' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Dec 09 12:03:52 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Dec 09 12:03:52 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e21 e21: 3 total, 2 up, 3 in
Dec 09 12:03:52 compute-0 stoic_jones[86328]: enabled application 'rbd' on pool 'vms'
Dec 09 12:03:52 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 2 up, 3 in
Dec 09 12:03:52 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 09 12:03:52 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 09 12:03:52 compute-0 ceph-mgr[74679]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 09 12:03:52 compute-0 ceph-mgr[74679]: [progress INFO root] update: starting ev 7d014a93-8a7f-41ee-a82c-4decd08b6dc8 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Dec 09 12:03:52 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0)
Dec 09 12:03:52 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Dec 09 12:03:52 compute-0 systemd[1]: libpod-720982f63ba210b52a00c42e33bde548c0306bc285287e5de18c9ae21a577f22.scope: Deactivated successfully.
Dec 09 12:03:52 compute-0 podman[86312]: 2025-12-09 12:03:52.427210709 +0000 UTC m=+1.682399257 container died 720982f63ba210b52a00c42e33bde548c0306bc285287e5de18c9ae21a577f22 (image=quay.io/ceph/ceph:v19, name=stoic_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 12:03:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-04ad631eba1195f82a3c77b749ca9ac23e1df18a18a8cb5ccf9aeecb3a8898a4-merged.mount: Deactivated successfully.
Dec 09 12:03:52 compute-0 ceph-mon[74388]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 09 12:03:52 compute-0 ceph-mon[74388]: from='client.? 192.168.122.100:0/4152405834' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Dec 09 12:03:52 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Dec 09 12:03:52 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Dec 09 12:03:52 compute-0 ceph-mon[74388]: osdmap e20: 3 total, 2 up, 3 in
Dec 09 12:03:52 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 09 12:03:52 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Dec 09 12:03:52 compute-0 ceph-mon[74388]: mgrmap e10: compute-0.wfxreg(active, since 2m), standbys: compute-2.hvlbot, compute-1.lorvly
Dec 09 12:03:52 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mgr metadata", "who": "compute-1.lorvly", "id": "compute-1.lorvly"}]: dispatch
Dec 09 12:03:52 compute-0 podman[86312]: 2025-12-09 12:03:52.463286544 +0000 UTC m=+1.718475092 container remove 720982f63ba210b52a00c42e33bde548c0306bc285287e5de18c9ae21a577f22 (image=quay.io/ceph/ceph:v19, name=stoic_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 09 12:03:52 compute-0 systemd[1]: libpod-conmon-720982f63ba210b52a00c42e33bde548c0306bc285287e5de18c9ae21a577f22.scope: Deactivated successfully.
Dec 09 12:03:52 compute-0 sudo[86309]: pam_unix(sudo:session): session closed for user root
Dec 09 12:03:52 compute-0 sudo[86388]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-firlvlawmnlmhvwbqnjjpwfcyecywwhf ; /usr/bin/python3'
Dec 09 12:03:52 compute-0 sudo[86388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:03:52 compute-0 ceph-mgr[74679]: [progress WARNING root] Starting Global Recovery Event,32 pgs not in active + clean state
Dec 09 12:03:52 compute-0 python3[86390]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 750b57e3-924f-51a5-ab09-01517535f732 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 12:03:52 compute-0 podman[86391]: 2025-12-09 12:03:52.810675594 +0000 UTC m=+0.044452031 container create b3cdbfcffae9e6f2d362679991c136ed3b04ea160ee9e2321d8baed92f92cfe5 (image=quay.io/ceph/ceph:v19, name=sad_moore, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Dec 09 12:03:52 compute-0 systemd[1]: Started libpod-conmon-b3cdbfcffae9e6f2d362679991c136ed3b04ea160ee9e2321d8baed92f92cfe5.scope.
Dec 09 12:03:52 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:03:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63c952e01ef35171b9263059f44744d2dc5a2f27ff279b51f064185bf32e1002/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:03:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63c952e01ef35171b9263059f44744d2dc5a2f27ff279b51f064185bf32e1002/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:03:52 compute-0 podman[86391]: 2025-12-09 12:03:52.882904456 +0000 UTC m=+0.116680903 container init b3cdbfcffae9e6f2d362679991c136ed3b04ea160ee9e2321d8baed92f92cfe5 (image=quay.io/ceph/ceph:v19, name=sad_moore, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Dec 09 12:03:52 compute-0 podman[86391]: 2025-12-09 12:03:52.789703246 +0000 UTC m=+0.023479733 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:03:52 compute-0 podman[86391]: 2025-12-09 12:03:52.888110448 +0000 UTC m=+0.121886885 container start b3cdbfcffae9e6f2d362679991c136ed3b04ea160ee9e2321d8baed92f92cfe5 (image=quay.io/ceph/ceph:v19, name=sad_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec 09 12:03:52 compute-0 podman[86391]: 2025-12-09 12:03:52.892037406 +0000 UTC m=+0.125813863 container attach b3cdbfcffae9e6f2d362679991c136ed3b04ea160ee9e2321d8baed92f92cfe5 (image=quay.io/ceph/ceph:v19, name=sad_moore, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 09 12:03:53 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0)
Dec 09 12:03:53 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2088352676' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Dec 09 12:03:53 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Dec 09 12:03:53 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Dec 09 12:03:53 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Dec 09 12:03:53 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2088352676' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Dec 09 12:03:53 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e22 e22: 3 total, 2 up, 3 in
Dec 09 12:03:53 compute-0 sad_moore[86406]: enabled application 'rbd' on pool 'volumes'
Dec 09 12:03:53 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 2 up, 3 in
Dec 09 12:03:53 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 09 12:03:53 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 09 12:03:53 compute-0 ceph-mgr[74679]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 09 12:03:53 compute-0 ceph-mgr[74679]: [progress INFO root] update: starting ev f8d1e584-9887-4a05-80a4-6ec12cf0aed4 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Dec 09 12:03:53 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"} v 0)
Dec 09 12:03:53 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Dec 09 12:03:53 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 22 pg[3.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=22 pruub=8.880224228s) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active pruub 57.427276611s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:03:53 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 22 pg[3.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=22 pruub=8.880224228s) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown pruub 57.427276611s@ mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:53 compute-0 systemd[1]: libpod-b3cdbfcffae9e6f2d362679991c136ed3b04ea160ee9e2321d8baed92f92cfe5.scope: Deactivated successfully.
Dec 09 12:03:53 compute-0 podman[86391]: 2025-12-09 12:03:53.433374496 +0000 UTC m=+0.667150923 container died b3cdbfcffae9e6f2d362679991c136ed3b04ea160ee9e2321d8baed92f92cfe5 (image=quay.io/ceph/ceph:v19, name=sad_moore, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 09 12:03:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-63c952e01ef35171b9263059f44744d2dc5a2f27ff279b51f064185bf32e1002-merged.mount: Deactivated successfully.
Dec 09 12:03:53 compute-0 ceph-mon[74388]: pgmap v74: 38 pgs: 1 creating+peering, 31 unknown, 6 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 09 12:03:53 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 09 12:03:53 compute-0 ceph-mon[74388]: from='client.? 192.168.122.100:0/4152405834' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Dec 09 12:03:53 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Dec 09 12:03:53 compute-0 ceph-mon[74388]: osdmap e21: 3 total, 2 up, 3 in
Dec 09 12:03:53 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 09 12:03:53 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Dec 09 12:03:53 compute-0 ceph-mon[74388]: from='client.? 192.168.122.100:0/2088352676' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Dec 09 12:03:53 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Dec 09 12:03:53 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Dec 09 12:03:53 compute-0 ceph-mon[74388]: from='client.? 192.168.122.100:0/2088352676' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Dec 09 12:03:53 compute-0 ceph-mon[74388]: osdmap e22: 3 total, 2 up, 3 in
Dec 09 12:03:53 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 09 12:03:53 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Dec 09 12:03:53 compute-0 podman[86391]: 2025-12-09 12:03:53.481305901 +0000 UTC m=+0.715082348 container remove b3cdbfcffae9e6f2d362679991c136ed3b04ea160ee9e2321d8baed92f92cfe5 (image=quay.io/ceph/ceph:v19, name=sad_moore, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 12:03:53 compute-0 systemd[1]: libpod-conmon-b3cdbfcffae9e6f2d362679991c136ed3b04ea160ee9e2321d8baed92f92cfe5.scope: Deactivated successfully.
Dec 09 12:03:53 compute-0 sudo[86388]: pam_unix(sudo:session): session closed for user root
Dec 09 12:03:53 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Dec 09 12:03:53 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Dec 09 12:03:53 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 12:03:53 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:03:53 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-2
Dec 09 12:03:53 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-2
Dec 09 12:03:53 compute-0 sudo[86466]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-caszcqkvigutomwrohmnkjwvingbqjwt ; /usr/bin/python3'
Dec 09 12:03:53 compute-0 sudo[86466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:03:53 compute-0 python3[86468]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 750b57e3-924f-51a5-ab09-01517535f732 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 12:03:53 compute-0 podman[86469]: 2025-12-09 12:03:53.842836115 +0000 UTC m=+0.037301206 container create 2c967794d841600bc2f0a54ee4a534aad50ecb970748af5e1790b573127d83ee (image=quay.io/ceph/ceph:v19, name=nifty_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec 09 12:03:53 compute-0 systemd[1]: Started libpod-conmon-2c967794d841600bc2f0a54ee4a534aad50ecb970748af5e1790b573127d83ee.scope.
Dec 09 12:03:53 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:03:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb755e34143de4348a20e8af6fe5cd1da12a483e41865d1dd568d06fdb237193/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:03:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb755e34143de4348a20e8af6fe5cd1da12a483e41865d1dd568d06fdb237193/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:03:53 compute-0 podman[86469]: 2025-12-09 12:03:53.8268633 +0000 UTC m=+0.021328411 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:03:53 compute-0 podman[86469]: 2025-12-09 12:03:53.926337657 +0000 UTC m=+0.120802788 container init 2c967794d841600bc2f0a54ee4a534aad50ecb970748af5e1790b573127d83ee (image=quay.io/ceph/ceph:v19, name=nifty_cori, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 09 12:03:53 compute-0 podman[86469]: 2025-12-09 12:03:53.933569284 +0000 UTC m=+0.128034375 container start 2c967794d841600bc2f0a54ee4a534aad50ecb970748af5e1790b573127d83ee (image=quay.io/ceph/ceph:v19, name=nifty_cori, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Dec 09 12:03:53 compute-0 podman[86469]: 2025-12-09 12:03:53.936538492 +0000 UTC m=+0.131003633 container attach 2c967794d841600bc2f0a54ee4a534aad50ecb970748af5e1790b573127d83ee (image=quay.io/ceph/ceph:v19, name=nifty_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec 09 12:03:54 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0)
Dec 09 12:03:54 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/400489904' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Dec 09 12:03:54 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v77: 69 pgs: 1 creating+peering, 62 unknown, 6 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 09 12:03:54 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0)
Dec 09 12:03:54 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 09 12:03:54 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0)
Dec 09 12:03:54 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 09 12:03:54 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Dec 09 12:03:54 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Dec 09 12:03:54 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/400489904' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Dec 09 12:03:54 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Dec 09 12:03:54 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Dec 09 12:03:54 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e23 e23: 3 total, 2 up, 3 in
Dec 09 12:03:54 compute-0 nifty_cori[86485]: enabled application 'rbd' on pool 'backups'
Dec 09 12:03:54 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 2 up, 3 in
Dec 09 12:03:54 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 09 12:03:54 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 09 12:03:54 compute-0 ceph-mgr[74679]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 09 12:03:54 compute-0 ceph-mgr[74679]: [progress INFO root] update: starting ev d376ff0d-bb04-473c-9ae7-17c2c5ef47c7 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Dec 09 12:03:54 compute-0 ceph-mgr[74679]: [progress INFO root] complete: finished ev f50996d6-795f-4780-a71c-7a84ad0a53d7 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Dec 09 12:03:54 compute-0 ceph-mgr[74679]: [progress INFO root] Completed event f50996d6-795f-4780-a71c-7a84ad0a53d7 (PG autoscaler increasing pool 2 PGs from 1 to 32) in 4 seconds
Dec 09 12:03:54 compute-0 ceph-mgr[74679]: [progress INFO root] complete: finished ev bca908ba-9a3f-45cf-ab74-0cd53e7688ab (PG autoscaler increasing pool 3 PGs from 1 to 32)
Dec 09 12:03:54 compute-0 ceph-mgr[74679]: [progress INFO root] Completed event bca908ba-9a3f-45cf-ab74-0cd53e7688ab (PG autoscaler increasing pool 3 PGs from 1 to 32) in 3 seconds
Dec 09 12:03:54 compute-0 ceph-mgr[74679]: [progress INFO root] complete: finished ev 7d014a93-8a7f-41ee-a82c-4decd08b6dc8 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Dec 09 12:03:54 compute-0 ceph-mgr[74679]: [progress INFO root] Completed event 7d014a93-8a7f-41ee-a82c-4decd08b6dc8 (PG autoscaler increasing pool 4 PGs from 1 to 32) in 2 seconds
Dec 09 12:03:54 compute-0 ceph-mgr[74679]: [progress INFO root] complete: finished ev f8d1e584-9887-4a05-80a4-6ec12cf0aed4 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Dec 09 12:03:54 compute-0 ceph-mgr[74679]: [progress INFO root] Completed event f8d1e584-9887-4a05-80a4-6ec12cf0aed4 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 1 second
Dec 09 12:03:54 compute-0 ceph-mgr[74679]: [progress INFO root] complete: finished ev d376ff0d-bb04-473c-9ae7-17c2c5ef47c7 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Dec 09 12:03:54 compute-0 ceph-mgr[74679]: [progress INFO root] Completed event d376ff0d-bb04-473c-9ae7-17c2c5ef47c7 (PG autoscaler increasing pool 6 PGs from 1 to 32) in 0 seconds
Dec 09 12:03:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 23 pg[3.19( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 23 pg[3.18( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 23 pg[3.17( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 23 pg[3.14( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 23 pg[3.16( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 23 pg[3.13( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 23 pg[3.12( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 23 pg[3.11( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 23 pg[3.10( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 23 pg[3.15( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 23 pg[3.f( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 23 pg[3.e( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 23 pg[3.d( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 23 pg[3.c( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 23 pg[3.b( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 23 pg[3.a( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 23 pg[3.7( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 23 pg[4.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=23 pruub=8.885478973s) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active pruub 58.443725586s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:03:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 23 pg[3.6( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 23 pg[3.5( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 23 pg[3.1( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 23 pg[3.2( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 23 pg[3.4( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 23 pg[3.3( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 23 pg[3.8( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 23 pg[5.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=23 pruub=9.586886406s) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active pruub 59.145511627s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:03:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 23 pg[3.1a( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 23 pg[3.9( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 23 pg[3.1b( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 23 pg[3.1c( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 23 pg[3.1d( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 23 pg[3.1e( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 23 pg[3.1f( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 23 pg[4.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=23 pruub=8.885478973s) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown pruub 58.443725586s@ mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 23 pg[5.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=23 pruub=9.586886406s) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown pruub 59.145511627s@ mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 23 pg[3.18( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 23 pg[3.19( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 23 pg[3.17( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 23 pg[3.14( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 23 pg[3.13( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 23 pg[3.12( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 23 pg[3.f( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 23 pg[3.e( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 23 pg[3.d( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 23 pg[3.16( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 23 pg[3.c( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 23 pg[3.b( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 23 pg[3.10( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 23 pg[3.0( empty local-lis/les=22/23 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 23 pg[3.a( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 23 pg[3.11( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 23 pg[3.7( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 23 pg[3.6( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 23 pg[3.5( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 23 pg[3.1( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 23 pg[3.2( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 23 pg[3.4( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 23 pg[3.15( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 23 pg[3.8( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 23 pg[3.1a( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 23 pg[3.1b( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 23 pg[3.1c( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 23 pg[3.9( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 23 pg[3.1d( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 23 pg[3.1e( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:54 compute-0 systemd[1]: libpod-2c967794d841600bc2f0a54ee4a534aad50ecb970748af5e1790b573127d83ee.scope: Deactivated successfully.
Dec 09 12:03:54 compute-0 podman[86469]: 2025-12-09 12:03:54.439816891 +0000 UTC m=+0.634281992 container died 2c967794d841600bc2f0a54ee4a534aad50ecb970748af5e1790b573127d83ee (image=quay.io/ceph/ceph:v19, name=nifty_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 12:03:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 23 pg[3.1f( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 23 pg[3.3( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb755e34143de4348a20e8af6fe5cd1da12a483e41865d1dd568d06fdb237193-merged.mount: Deactivated successfully.
Dec 09 12:03:54 compute-0 podman[86469]: 2025-12-09 12:03:54.48238029 +0000 UTC m=+0.676845391 container remove 2c967794d841600bc2f0a54ee4a534aad50ecb970748af5e1790b573127d83ee (image=quay.io/ceph/ceph:v19, name=nifty_cori, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 12:03:54 compute-0 systemd[1]: libpod-conmon-2c967794d841600bc2f0a54ee4a534aad50ecb970748af5e1790b573127d83ee.scope: Deactivated successfully.
Dec 09 12:03:54 compute-0 ceph-mon[74388]: 2.1f deep-scrub starts
Dec 09 12:03:54 compute-0 ceph-mon[74388]: 2.1f deep-scrub ok
Dec 09 12:03:54 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Dec 09 12:03:54 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:03:54 compute-0 ceph-mon[74388]: Deploying daemon osd.2 on compute-2
Dec 09 12:03:54 compute-0 ceph-mon[74388]: from='client.? 192.168.122.100:0/400489904' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Dec 09 12:03:54 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 09 12:03:54 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 09 12:03:54 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Dec 09 12:03:54 compute-0 ceph-mon[74388]: from='client.? 192.168.122.100:0/400489904' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Dec 09 12:03:54 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Dec 09 12:03:54 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Dec 09 12:03:54 compute-0 ceph-mon[74388]: osdmap e23: 3 total, 2 up, 3 in
Dec 09 12:03:54 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 09 12:03:54 compute-0 sudo[86466]: pam_unix(sudo:session): session closed for user root
Dec 09 12:03:54 compute-0 sudo[86543]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkixbzoulrckoncultgicvrwusneemue ; /usr/bin/python3'
Dec 09 12:03:54 compute-0 sudo[86543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:03:54 compute-0 python3[86545]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 750b57e3-924f-51a5-ab09-01517535f732 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 12:03:54 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Dec 09 12:03:54 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Dec 09 12:03:54 compute-0 podman[86546]: 2025-12-09 12:03:54.828556149 +0000 UTC m=+0.041790453 container create 0e66b6a3366709105dec1b28249eae16cc5ab6470796adf489c40fab1615d648 (image=quay.io/ceph/ceph:v19, name=gifted_jennings, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 12:03:54 compute-0 systemd[1]: Started libpod-conmon-0e66b6a3366709105dec1b28249eae16cc5ab6470796adf489c40fab1615d648.scope.
Dec 09 12:03:54 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:03:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3905005af7419d7c11003a85ee02f1dbee17e9ed02bf4b08bf3ab7aa830830bf/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:03:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3905005af7419d7c11003a85ee02f1dbee17e9ed02bf4b08bf3ab7aa830830bf/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:03:54 compute-0 podman[86546]: 2025-12-09 12:03:54.889030126 +0000 UTC m=+0.102264420 container init 0e66b6a3366709105dec1b28249eae16cc5ab6470796adf489c40fab1615d648 (image=quay.io/ceph/ceph:v19, name=gifted_jennings, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 12:03:54 compute-0 podman[86546]: 2025-12-09 12:03:54.894115913 +0000 UTC m=+0.107350207 container start 0e66b6a3366709105dec1b28249eae16cc5ab6470796adf489c40fab1615d648 (image=quay.io/ceph/ceph:v19, name=gifted_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 09 12:03:54 compute-0 podman[86546]: 2025-12-09 12:03:54.903494901 +0000 UTC m=+0.116729205 container attach 0e66b6a3366709105dec1b28249eae16cc5ab6470796adf489c40fab1615d648 (image=quay.io/ceph/ceph:v19, name=gifted_jennings, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 09 12:03:54 compute-0 podman[86546]: 2025-12-09 12:03:54.81395408 +0000 UTC m=+0.027188394 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:03:55 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0)
Dec 09 12:03:55 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3519615890' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Dec 09 12:03:55 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Dec 09 12:03:55 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3519615890' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Dec 09 12:03:55 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e24 e24: 3 total, 2 up, 3 in
Dec 09 12:03:55 compute-0 gifted_jennings[86561]: enabled application 'rbd' on pool 'images'
Dec 09 12:03:55 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 2 up, 3 in
Dec 09 12:03:55 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 09 12:03:55 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 09 12:03:55 compute-0 ceph-mgr[74679]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[5.1f( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[4.1e( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[4.1f( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[5.11( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[4.10( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[5.10( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[4.11( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[5.13( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[4.12( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[4.13( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[5.12( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[5.15( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[4.14( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[5.14( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[4.15( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[5.1e( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[5.17( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[4.16( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[5.16( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[4.17( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[5.9( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[4.8( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[5.8( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[4.9( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[5.b( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[4.a( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[5.a( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[4.b( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[5.d( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[5.c( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[4.c( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[4.d( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[5.6( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[4.7( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[5.1( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[4.1( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[5.3( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[4.2( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[5.7( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[4.6( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[5.4( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[4.5( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[4.4( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[5.2( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[4.3( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[5.e( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[4.f( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[5.f( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[5.5( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[4.e( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[5.1c( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[4.1d( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[5.1d( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[4.1c( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[5.1a( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[4.1b( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[5.1b( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[5.18( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[4.19( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[4.1a( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[4.18( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[5.19( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[5.1f( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[4.1f( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[5.11( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[4.10( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[4.1e( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[5.10( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[4.11( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[4.12( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[5.13( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[5.15( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[4.13( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[5.12( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[5.14( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[4.14( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[4.15( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[5.17( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[4.16( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[4.17( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[5.9( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[5.16( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[5.8( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[4.8( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[4.9( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[4.a( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[5.a( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[4.b( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[5.d( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[5.c( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[4.c( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[4.d( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[5.6( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[4.7( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[4.0( empty local-lis/les=23/24 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[5.1( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[5.0( empty local-lis/les=23/24 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[4.1( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[5.3( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[4.2( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[5.7( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[5.1e( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[4.6( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[5.4( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[4.5( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[4.4( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[5.2( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[4.3( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[5.e( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[5.f( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[4.f( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[4.e( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[5.1c( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[5.5( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[4.1d( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[4.1c( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[5.1d( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[5.b( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[5.1b( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[4.1b( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[5.1a( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[4.1a( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[4.19( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[5.19( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[4.18( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:55 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 24 pg[5.18( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:55 compute-0 systemd[1]: libpod-0e66b6a3366709105dec1b28249eae16cc5ab6470796adf489c40fab1615d648.scope: Deactivated successfully.
Dec 09 12:03:55 compute-0 podman[86546]: 2025-12-09 12:03:55.456916257 +0000 UTC m=+0.670150561 container died 0e66b6a3366709105dec1b28249eae16cc5ab6470796adf489c40fab1615d648 (image=quay.io/ceph/ceph:v19, name=gifted_jennings, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 12:03:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-3905005af7419d7c11003a85ee02f1dbee17e9ed02bf4b08bf3ab7aa830830bf-merged.mount: Deactivated successfully.
Dec 09 12:03:55 compute-0 podman[86546]: 2025-12-09 12:03:55.496318001 +0000 UTC m=+0.709552295 container remove 0e66b6a3366709105dec1b28249eae16cc5ab6470796adf489c40fab1615d648 (image=quay.io/ceph/ceph:v19, name=gifted_jennings, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 12:03:55 compute-0 ceph-mon[74388]: pgmap v77: 69 pgs: 1 creating+peering, 62 unknown, 6 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 09 12:03:55 compute-0 ceph-mon[74388]: 2.a scrub starts
Dec 09 12:03:55 compute-0 ceph-mon[74388]: 2.a scrub ok
Dec 09 12:03:55 compute-0 ceph-mon[74388]: 3.18 scrub starts
Dec 09 12:03:55 compute-0 ceph-mon[74388]: 3.18 scrub ok
Dec 09 12:03:55 compute-0 ceph-mon[74388]: from='client.? 192.168.122.100:0/3519615890' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Dec 09 12:03:55 compute-0 ceph-mon[74388]: from='client.? 192.168.122.100:0/3519615890' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Dec 09 12:03:55 compute-0 ceph-mon[74388]: osdmap e24: 3 total, 2 up, 3 in
Dec 09 12:03:55 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 09 12:03:55 compute-0 systemd[1]: libpod-conmon-0e66b6a3366709105dec1b28249eae16cc5ab6470796adf489c40fab1615d648.scope: Deactivated successfully.
Dec 09 12:03:55 compute-0 sudo[86543]: pam_unix(sudo:session): session closed for user root
Dec 09 12:03:55 compute-0 sudo[86620]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywcxpjbfhxnudndpogtipoiegqldnqbj ; /usr/bin/python3'
Dec 09 12:03:55 compute-0 sudo[86620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:03:55 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Dec 09 12:03:55 compute-0 python3[86622]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 750b57e3-924f-51a5-ab09-01517535f732 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 12:03:55 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Dec 09 12:03:55 compute-0 podman[86623]: 2025-12-09 12:03:55.850162783 +0000 UTC m=+0.038444284 container create c6c84b28723dc7c6e2ab46f83a30f72d07d818d5b32d73a2eb71a641adf594c5 (image=quay.io/ceph/ceph:v19, name=intelligent_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 09 12:03:55 compute-0 systemd[75771]: Starting Mark boot as successful...
Dec 09 12:03:55 compute-0 systemd[75771]: Finished Mark boot as successful.
Dec 09 12:03:55 compute-0 systemd[1]: Started libpod-conmon-c6c84b28723dc7c6e2ab46f83a30f72d07d818d5b32d73a2eb71a641adf594c5.scope.
Dec 09 12:03:55 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:03:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/052ca2bd14292c0b6e3951d5945a543860d7f79963520259683ba8a1d4f0cdd3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:03:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/052ca2bd14292c0b6e3951d5945a543860d7f79963520259683ba8a1d4f0cdd3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:03:55 compute-0 podman[86623]: 2025-12-09 12:03:55.912171519 +0000 UTC m=+0.100453030 container init c6c84b28723dc7c6e2ab46f83a30f72d07d818d5b32d73a2eb71a641adf594c5 (image=quay.io/ceph/ceph:v19, name=intelligent_blackwell, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 12:03:55 compute-0 podman[86623]: 2025-12-09 12:03:55.917623188 +0000 UTC m=+0.105904689 container start c6c84b28723dc7c6e2ab46f83a30f72d07d818d5b32d73a2eb71a641adf594c5 (image=quay.io/ceph/ceph:v19, name=intelligent_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Dec 09 12:03:55 compute-0 podman[86623]: 2025-12-09 12:03:55.920743181 +0000 UTC m=+0.109024712 container attach c6c84b28723dc7c6e2ab46f83a30f72d07d818d5b32d73a2eb71a641adf594c5 (image=quay.io/ceph/ceph:v19, name=intelligent_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 09 12:03:55 compute-0 podman[86623]: 2025-12-09 12:03:55.835044366 +0000 UTC m=+0.023325887 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:03:56 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0)
Dec 09 12:03:56 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3702886328' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Dec 09 12:03:56 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v80: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 09 12:03:56 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"} v 0)
Dec 09 12:03:56 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 09 12:03:56 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 09 12:03:56 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 09 12:03:56 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 09 12:03:56 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 09 12:03:56 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 09 12:03:56 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 09 12:03:56 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 09 12:03:56 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 09 12:03:56 compute-0 ceph-mon[74388]: log_channel(cluster) log [WRN] : Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 09 12:03:56 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e24 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 12:03:56 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Dec 09 12:03:56 compute-0 ceph-mon[74388]: 2.1e scrub starts
Dec 09 12:03:56 compute-0 ceph-mon[74388]: 2.1e scrub ok
Dec 09 12:03:56 compute-0 ceph-mon[74388]: 3.19 scrub starts
Dec 09 12:03:56 compute-0 ceph-mon[74388]: 3.19 scrub ok
Dec 09 12:03:56 compute-0 ceph-mon[74388]: from='client.? 192.168.122.100:0/3702886328' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Dec 09 12:03:56 compute-0 ceph-mon[74388]: 2.1c deep-scrub starts
Dec 09 12:03:56 compute-0 ceph-mon[74388]: 2.1c deep-scrub ok
Dec 09 12:03:56 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 09 12:03:56 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 09 12:03:56 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 09 12:03:56 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 09 12:03:56 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 09 12:03:56 compute-0 ceph-mon[74388]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 09 12:03:56 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3702886328' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Dec 09 12:03:56 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec 09 12:03:56 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 09 12:03:56 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 09 12:03:56 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 09 12:03:56 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 09 12:03:56 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e25 e25: 3 total, 2 up, 3 in
Dec 09 12:03:56 compute-0 intelligent_blackwell[86639]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Dec 09 12:03:56 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 2 up, 3 in
Dec 09 12:03:56 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 09 12:03:56 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 09 12:03:56 compute-0 ceph-mgr[74679]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[2.19( empty local-lis/les=0/0 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=25) [1] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[2.15( empty local-lis/les=0/0 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=25) [1] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[2.13( empty local-lis/les=0/0 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=25) [1] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[2.10( empty local-lis/les=0/0 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=25) [1] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[2.e( empty local-lis/les=0/0 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=25) [1] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[2.c( empty local-lis/les=0/0 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=25) [1] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[2.d( empty local-lis/les=0/0 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=25) [1] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[2.a( empty local-lis/les=0/0 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=25) [1] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[2.1( empty local-lis/les=0/0 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=25) [1] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[2.6( empty local-lis/les=0/0 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=25) [1] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[2.4( empty local-lis/les=0/0 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=25) [1] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[2.9( empty local-lis/les=0/0 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=25) [1] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[2.1b( empty local-lis/les=0/0 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=25) [1] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[2.1f( empty local-lis/les=0/0 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=25) [1] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[2.1e( empty local-lis/les=0/0 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=25) [1] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[4.18( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=25 pruub=14.911334991s) [0] r=-1 lpr=25 pi=[23,25)/1 crt=0'0 mlcod 0'0 active pruub 66.581916809s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[4.18( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=25 pruub=14.911304474s) [0] r=-1 lpr=25 pi=[23,25)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.581916809s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[3.1d( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=25 pruub=13.894814491s) [0] r=-1 lpr=25 pi=[22,25)/1 crt=0'0 mlcod 0'0 active pruub 65.565788269s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[3.1d( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=25 pruub=13.894770622s) [0] r=-1 lpr=25 pi=[22,25)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 65.565788269s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[4.1a( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=25 pruub=14.910847664s) [0] r=-1 lpr=25 pi=[23,25)/1 crt=0'0 mlcod 0'0 active pruub 66.581886292s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[3.1c( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=25 pruub=13.894700050s) [0] r=-1 lpr=25 pi=[22,25)/1 crt=0'0 mlcod 0'0 active pruub 65.565765381s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[5.1b( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=25 pruub=14.910758972s) [0] r=-1 lpr=25 pi=[23,25)/1 crt=0'0 mlcod 0'0 active pruub 66.581825256s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[4.1a( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=25 pruub=14.910828590s) [0] r=-1 lpr=25 pi=[23,25)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.581886292s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[3.1c( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=25 pruub=13.894681931s) [0] r=-1 lpr=25 pi=[22,25)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 65.565765381s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[5.1b( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=25 pruub=14.910742760s) [0] r=-1 lpr=25 pi=[23,25)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.581825256s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[4.1b( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=25 pruub=14.910603523s) [0] r=-1 lpr=25 pi=[23,25)/1 crt=0'0 mlcod 0'0 active pruub 66.581832886s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[5.1a( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=25 pruub=14.910619736s) [0] r=-1 lpr=25 pi=[23,25)/1 crt=0'0 mlcod 0'0 active pruub 66.581855774s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[4.1b( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=25 pruub=14.910589218s) [0] r=-1 lpr=25 pi=[23,25)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.581832886s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[5.1a( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=25 pruub=14.910601616s) [0] r=-1 lpr=25 pi=[23,25)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.581855774s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[3.1a( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=25 pruub=13.894421577s) [0] r=-1 lpr=25 pi=[22,25)/1 crt=0'0 mlcod 0'0 active pruub 65.565734863s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[3.1a( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=25 pruub=13.894408226s) [0] r=-1 lpr=25 pi=[22,25)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 65.565734863s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[4.e( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=25 pruub=14.910345078s) [0] r=-1 lpr=25 pi=[23,25)/1 crt=0'0 mlcod 0'0 active pruub 66.581710815s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[4.e( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=25 pruub=14.910332680s) [0] r=-1 lpr=25 pi=[23,25)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.581710815s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[5.1c( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=25 pruub=14.910346031s) [0] r=-1 lpr=25 pi=[23,25)/1 crt=0'0 mlcod 0'0 active pruub 66.581733704s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[5.1c( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=25 pruub=14.910327911s) [0] r=-1 lpr=25 pi=[23,25)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.581733704s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[3.9( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=25 pruub=13.894348145s) [0] r=-1 lpr=25 pi=[22,25)/1 crt=0'0 mlcod 0'0 active pruub 65.565773010s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[3.9( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=25 pruub=13.894336700s) [0] r=-1 lpr=25 pi=[22,25)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 65.565773010s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[5.f( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=25 pruub=14.910198212s) [0] r=-1 lpr=25 pi=[23,25)/1 crt=0'0 mlcod 0'0 active pruub 66.581665039s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[5.f( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=25 pruub=14.910186768s) [0] r=-1 lpr=25 pi=[23,25)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.581665039s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[5.e( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=25 pruub=14.910098076s) [0] r=-1 lpr=25 pi=[23,25)/1 crt=0'0 mlcod 0'0 active pruub 66.581657410s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[5.2( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=25 pruub=14.910032272s) [0] r=-1 lpr=25 pi=[23,25)/1 crt=0'0 mlcod 0'0 active pruub 66.581626892s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[5.e( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=25 pruub=14.910065651s) [0] r=-1 lpr=25 pi=[23,25)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.581657410s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[5.2( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=25 pruub=14.910017967s) [0] r=-1 lpr=25 pi=[23,25)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.581626892s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[3.3( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=25 pruub=13.901878357s) [0] r=-1 lpr=25 pi=[22,25)/1 crt=0'0 mlcod 0'0 active pruub 65.573509216s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[3.3( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=25 pruub=13.901865959s) [0] r=-1 lpr=25 pi=[22,25)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 65.573509216s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[4.5( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=25 pruub=14.909885406s) [0] r=-1 lpr=25 pi=[23,25)/1 crt=0'0 mlcod 0'0 active pruub 66.581604004s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[4.5( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=25 pruub=14.909873962s) [0] r=-1 lpr=25 pi=[23,25)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.581604004s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[5.4( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=25 pruub=14.909838676s) [0] r=-1 lpr=25 pi=[23,25)/1 crt=0'0 mlcod 0'0 active pruub 66.581588745s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[5.4( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=25 pruub=14.909827232s) [0] r=-1 lpr=25 pi=[23,25)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.581588745s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[5.7( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=25 pruub=14.909734726s) [0] r=-1 lpr=25 pi=[23,25)/1 crt=0'0 mlcod 0'0 active pruub 66.581535339s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[5.7( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=25 pruub=14.909722328s) [0] r=-1 lpr=25 pi=[23,25)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.581535339s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[3.5( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=25 pruub=13.893149376s) [0] r=-1 lpr=25 pi=[22,25)/1 crt=0'0 mlcod 0'0 active pruub 65.564994812s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[3.5( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=25 pruub=13.893138885s) [0] r=-1 lpr=25 pi=[22,25)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 65.564994812s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[6.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=25 pruub=9.651549339s) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active pruub 61.323429108s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[4.1( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=25 pruub=14.909548759s) [0] r=-1 lpr=25 pi=[23,25)/1 crt=0'0 mlcod 0'0 active pruub 66.581466675s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[4.1( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=25 pruub=14.909536362s) [0] r=-1 lpr=25 pi=[23,25)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.581466675s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[5.1( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=25 pruub=14.909382820s) [0] r=-1 lpr=25 pi=[23,25)/1 crt=0'0 mlcod 0'0 active pruub 66.581398010s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[5.1( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=25 pruub=14.909370422s) [0] r=-1 lpr=25 pi=[23,25)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.581398010s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[4.d( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=25 pruub=14.909286499s) [0] r=-1 lpr=25 pi=[23,25)/1 crt=0'0 mlcod 0'0 active pruub 66.581336975s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[4.d( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=25 pruub=14.909259796s) [0] r=-1 lpr=25 pi=[23,25)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.581336975s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[3.a( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=25 pruub=13.892736435s) [0] r=-1 lpr=25 pi=[22,25)/1 crt=0'0 mlcod 0'0 active pruub 65.564941406s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[3.a( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=25 pruub=13.892714500s) [0] r=-1 lpr=25 pi=[22,25)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 65.564941406s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[4.c( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=25 pruub=14.909049034s) [0] r=-1 lpr=25 pi=[23,25)/1 crt=0'0 mlcod 0'0 active pruub 66.581314087s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[4.c( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=25 pruub=14.909033775s) [0] r=-1 lpr=25 pi=[23,25)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.581314087s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[3.c( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=25 pruub=13.892458916s) [0] r=-1 lpr=25 pi=[22,25)/1 crt=0'0 mlcod 0'0 active pruub 65.564849854s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[4.a( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=25 pruub=14.908839226s) [0] r=-1 lpr=25 pi=[23,25)/1 crt=0'0 mlcod 0'0 active pruub 66.581253052s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[3.c( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=25 pruub=13.892448425s) [0] r=-1 lpr=25 pi=[22,25)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 65.564849854s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[4.a( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=25 pruub=14.908827782s) [0] r=-1 lpr=25 pi=[23,25)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.581253052s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[3.d( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=25 pruub=13.892313957s) [0] r=-1 lpr=25 pi=[22,25)/1 crt=0'0 mlcod 0'0 active pruub 65.564826965s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[3.d( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=25 pruub=13.892303467s) [0] r=-1 lpr=25 pi=[22,25)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 65.564826965s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[4.9( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=25 pruub=14.908658028s) [0] r=-1 lpr=25 pi=[23,25)/1 crt=0'0 mlcod 0'0 active pruub 66.581230164s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[4.9( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=25 pruub=14.908640862s) [0] r=-1 lpr=25 pi=[23,25)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.581230164s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[3.e( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=25 pruub=13.892155647s) [0] r=-1 lpr=25 pi=[22,25)/1 crt=0'0 mlcod 0'0 active pruub 65.564773560s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[3.e( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=25 pruub=13.892136574s) [0] r=-1 lpr=25 pi=[22,25)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 65.564773560s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[4.8( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=25 pruub=14.908492088s) [0] r=-1 lpr=25 pi=[23,25)/1 crt=0'0 mlcod 0'0 active pruub 66.581214905s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[3.f( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=25 pruub=13.892011642s) [0] r=-1 lpr=25 pi=[22,25)/1 crt=0'0 mlcod 0'0 active pruub 65.564758301s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[4.8( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=25 pruub=14.908479691s) [0] r=-1 lpr=25 pi=[23,25)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.581214905s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:03:56 compute-0 conmon[86639]: conmon c6c84b28723dc7c6e2ab <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c6c84b28723dc7c6e2ab46f83a30f72d07d818d5b32d73a2eb71a641adf594c5.scope/container/memory.events
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[3.f( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=25 pruub=13.891997337s) [0] r=-1 lpr=25 pi=[22,25)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 65.564758301s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[5.9( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=25 pruub=14.908331871s) [0] r=-1 lpr=25 pi=[23,25)/1 crt=0'0 mlcod 0'0 active pruub 66.581161499s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[5.9( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=25 pruub=14.908318520s) [0] r=-1 lpr=25 pi=[23,25)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.581161499s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[3.10( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=25 pruub=13.891983032s) [0] r=-1 lpr=25 pi=[22,25)/1 crt=0'0 mlcod 0'0 active pruub 65.564872742s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[3.10( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=25 pruub=13.891970634s) [0] r=-1 lpr=25 pi=[22,25)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 65.564872742s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[5.16( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=25 pruub=14.908249855s) [0] r=-1 lpr=25 pi=[23,25)/1 crt=0'0 mlcod 0'0 active pruub 66.581169128s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[5.16( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=25 pruub=14.908240318s) [0] r=-1 lpr=25 pi=[23,25)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.581169128s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[3.11( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=25 pruub=13.891960144s) [0] r=-1 lpr=25 pi=[22,25)/1 crt=0'0 mlcod 0'0 active pruub 65.564956665s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[4.15( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=25 pruub=14.907889366s) [0] r=-1 lpr=25 pi=[23,25)/1 crt=0'0 mlcod 0'0 active pruub 66.580909729s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[4.15( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=25 pruub=14.907875061s) [0] r=-1 lpr=25 pi=[23,25)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.580909729s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[3.11( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=25 pruub=13.891938210s) [0] r=-1 lpr=25 pi=[22,25)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 65.564956665s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[3.13( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=25 pruub=13.891654968s) [0] r=-1 lpr=25 pi=[22,25)/1 crt=0'0 mlcod 0'0 active pruub 65.564704895s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[3.13( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=25 pruub=13.891634941s) [0] r=-1 lpr=25 pi=[22,25)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 65.564704895s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[5.15( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=25 pruub=14.901428223s) [0] r=-1 lpr=25 pi=[23,25)/1 crt=0'0 mlcod 0'0 active pruub 66.574562073s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[4.13( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=25 pruub=14.901392937s) [0] r=-1 lpr=25 pi=[23,25)/1 crt=0'0 mlcod 0'0 active pruub 66.574592590s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[5.15( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=25 pruub=14.901414871s) [0] r=-1 lpr=25 pi=[23,25)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.574562073s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:03:56 compute-0 systemd[1]: libpod-c6c84b28723dc7c6e2ab46f83a30f72d07d818d5b32d73a2eb71a641adf594c5.scope: Deactivated successfully.
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[4.13( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=25 pruub=14.901376724s) [0] r=-1 lpr=25 pi=[23,25)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.574592590s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[3.14( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=25 pruub=13.891422272s) [0] r=-1 lpr=25 pi=[22,25)/1 crt=0'0 mlcod 0'0 active pruub 65.564666748s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[3.14( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=25 pruub=13.891404152s) [0] r=-1 lpr=25 pi=[22,25)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 65.564666748s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[3.15( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=25 pruub=13.892297745s) [0] r=-1 lpr=25 pi=[22,25)/1 crt=0'0 mlcod 0'0 active pruub 65.565704346s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[3.15( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=25 pruub=13.892284393s) [0] r=-1 lpr=25 pi=[22,25)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 65.565704346s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[3.16( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=25 pruub=13.891386986s) [0] r=-1 lpr=25 pi=[22,25)/1 crt=0'0 mlcod 0'0 active pruub 65.564842224s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[3.16( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=25 pruub=13.891370773s) [0] r=-1 lpr=25 pi=[22,25)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 65.564842224s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[5.10( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=25 pruub=14.901005745s) [0] r=-1 lpr=25 pi=[23,25)/1 crt=0'0 mlcod 0'0 active pruub 66.574493408s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[5.10( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=25 pruub=14.900984764s) [0] r=-1 lpr=25 pi=[23,25)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.574493408s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[5.11( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=25 pruub=14.900860786s) [0] r=-1 lpr=25 pi=[23,25)/1 crt=0'0 mlcod 0'0 active pruub 66.574432373s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[5.11( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=25 pruub=14.900845528s) [0] r=-1 lpr=25 pi=[23,25)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.574432373s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[4.1f( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=25 pruub=14.900743484s) [0] r=-1 lpr=25 pi=[23,25)/1 crt=0'0 mlcod 0'0 active pruub 66.574356079s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[4.1f( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=25 pruub=14.900729179s) [0] r=-1 lpr=25 pi=[23,25)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.574356079s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[5.1f( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=25 pruub=14.900568962s) [0] r=-1 lpr=25 pi=[23,25)/1 crt=0'0 mlcod 0'0 active pruub 66.574256897s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[5.1f( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=25 pruub=14.900559425s) [0] r=-1 lpr=25 pi=[23,25)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.574256897s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[5.18( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=25 pruub=14.911001205s) [0] r=-1 lpr=25 pi=[23,25)/1 crt=0'0 mlcod 0'0 active pruub 66.581932068s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[5.18( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=25 pruub=14.907973289s) [0] r=-1 lpr=25 pi=[23,25)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.581932068s@ mbc={}] state<Start>: transitioning to Stray
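The burst of start_peering_interval / "transitioning to Stray" messages above is osd.1 reacting to osdmap epoch 25: for these PGs the up and acting sets changed from [1] to [0], so osd.1's role drops from primary (0) to none (-1) and its local PG instances become Stray replicas waiting on the new primary. To inspect one of these PGs by hand (pg id 4.9 taken from this log; any listed pg works), something like:

    ceph pg 4.9 query    # peering state, past intervals, up/acting sets
    ceph pg map 4.9      # just the current up/acting mapping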
Dec 09 12:03:56 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 25 pg[6.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=25 pruub=9.651549339s) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown pruub 61.323429108s@ mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:56 compute-0 podman[86664]: 2025-12-09 12:03:56.5893415 +0000 UTC m=+0.026198191 container died c6c84b28723dc7c6e2ab46f83a30f72d07d818d5b32d73a2eb71a641adf594c5 (image=quay.io/ceph/ceph:v19, name=intelligent_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 12:03:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-052ca2bd14292c0b6e3951d5945a543860d7f79963520259683ba8a1d4f0cdd3-merged.mount: Deactivated successfully.
Dec 09 12:03:56 compute-0 podman[86664]: 2025-12-09 12:03:56.622684726 +0000 UTC m=+0.059541397 container remove c6c84b28723dc7c6e2ab46f83a30f72d07d818d5b32d73a2eb71a641adf594c5 (image=quay.io/ceph/ceph:v19, name=intelligent_blackwell, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 12:03:56 compute-0 systemd[1]: libpod-conmon-c6c84b28723dc7c6e2ab46f83a30f72d07d818d5b32d73a2eb71a641adf594c5.scope: Deactivated successfully.
Dec 09 12:03:56 compute-0 sudo[86620]: pam_unix(sudo:session): session closed for user root
Dec 09 12:03:56 compute-0 sudo[86702]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvulhyekgpqeruvsovldrwziclhrkset ; /usr/bin/python3'
Dec 09 12:03:56 compute-0 sudo[86702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:03:56 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Dec 09 12:03:56 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Dec 09 12:03:56 compute-0 python3[86704]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 750b57e3-924f-51a5-ab09-01517535f732 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
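The Ansible task above drives the ceph CLI through a throwaway container rather than a host-installed client. Stripped of the module bookkeeping, the equivalent hand-run command would be roughly (image, fsid, and pool name copied from this log):

    podman run --rm --net=host --ipc=host \
        --volume /etc/ceph:/etc/ceph:z \
        --entrypoint ceph quay.io/ceph/ceph:v19 \
        --fsid 750b57e3-924f-51a5-ab09-01517535f732 \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        osd pool application enable cephfs.cephfs.data cephfs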
Dec 09 12:03:56 compute-0 podman[86705]: 2025-12-09 12:03:56.957921876 +0000 UTC m=+0.037926297 container create a1a662c6da47b2d4895135fc27665e5db40f027dabc1c8edf3a62fb7f4a6758c (image=quay.io/ceph/ceph:v19, name=bold_ptolemy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec 09 12:03:56 compute-0 systemd[1]: Started libpod-conmon-a1a662c6da47b2d4895135fc27665e5db40f027dabc1c8edf3a62fb7f4a6758c.scope.
Dec 09 12:03:57 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:03:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47c306c7cf7f685229431b479c0c3fd333872c9060b692699135fd7010aa5c75/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:03:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47c306c7cf7f685229431b479c0c3fd333872c9060b692699135fd7010aa5c75/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:03:57 compute-0 podman[86705]: 2025-12-09 12:03:57.034430969 +0000 UTC m=+0.114435410 container init a1a662c6da47b2d4895135fc27665e5db40f027dabc1c8edf3a62fb7f4a6758c (image=quay.io/ceph/ceph:v19, name=bold_ptolemy, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 09 12:03:57 compute-0 podman[86705]: 2025-12-09 12:03:56.941355272 +0000 UTC m=+0.021359713 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:03:57 compute-0 podman[86705]: 2025-12-09 12:03:57.040543579 +0000 UTC m=+0.120548000 container start a1a662c6da47b2d4895135fc27665e5db40f027dabc1c8edf3a62fb7f4a6758c (image=quay.io/ceph/ceph:v19, name=bold_ptolemy, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec 09 12:03:57 compute-0 podman[86705]: 2025-12-09 12:03:57.044411777 +0000 UTC m=+0.124416268 container attach a1a662c6da47b2d4895135fc27665e5db40f027dabc1c8edf3a62fb7f4a6758c (image=quay.io/ceph/ceph:v19, name=bold_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec 09 12:03:57 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0)
Dec 09 12:03:57 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2255612717' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Dec 09 12:03:57 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Dec 09 12:03:57 compute-0 ceph-mon[74388]: pgmap v80: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 09 12:03:57 compute-0 ceph-mon[74388]: from='client.? 192.168.122.100:0/3702886328' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Dec 09 12:03:57 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec 09 12:03:57 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 09 12:03:57 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 09 12:03:57 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 09 12:03:57 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
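The pg_num_actual / pgp_num_actual commands finishing above are issued by the mgr itself, most likely the pg_autoscaler stepping the pools toward 32 PGs. Operators normally set only the target values and let the mgr converge the *_actual counters, e.g.:

    ceph osd pool set volumes pg_num 32
    ceph osd pool autoscale-status    # current vs. target PG counts per pool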
Dec 09 12:03:57 compute-0 ceph-mon[74388]: osdmap e25: 3 total, 2 up, 3 in
Dec 09 12:03:57 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 09 12:03:57 compute-0 ceph-mon[74388]: 3.1f scrub starts
Dec 09 12:03:57 compute-0 ceph-mon[74388]: 3.1f scrub ok
Dec 09 12:03:57 compute-0 ceph-mon[74388]: 2.1d scrub starts
Dec 09 12:03:57 compute-0 ceph-mon[74388]: 2.1d scrub ok
Dec 09 12:03:57 compute-0 ceph-mon[74388]: from='client.? 192.168.122.100:0/2255612717' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Dec 09 12:03:57 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2255612717' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Dec 09 12:03:57 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e26 e26: 3 total, 2 up, 3 in
Dec 09 12:03:57 compute-0 bold_ptolemy[86721]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Dec 09 12:03:57 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 2 up, 3 in
Dec 09 12:03:57 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 09 12:03:57 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 09 12:03:57 compute-0 ceph-mgr[74679]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
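The "failed to return metadata for osd.2" message is consistent with the osdmap a few lines earlier ("3 total, 2 up, 3 in"): osd.2 exists in the map but has not finished booting on compute-2, so the mon has no metadata blob for it yet. Once it comes up, the same query the mgr is making can be run by hand:

    ceph osd metadata 2    # hostname, device class, addresses, etc.
    ceph osd tree          # which OSDs are up/in and where they sit in CRUSH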
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[6.1a( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[6.1b( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[6.18( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[6.19( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[6.1f( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[6.1e( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[6.c( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[6.d( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[6.1( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[6.6( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[6.7( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[6.4( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[6.3( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[6.2( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[6.5( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[6.f( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[6.e( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[6.9( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[6.b( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[6.a( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[6.14( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[6.8( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[6.17( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[6.15( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[6.16( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[6.11( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[6.10( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[6.13( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[6.12( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[6.1d( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[6.1c( empty local-lis/les=18/19 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[2.19( empty local-lis/les=25/26 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=25) [1] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[2.c( empty local-lis/les=25/26 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=25) [1] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[2.13( empty local-lis/les=25/26 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=25) [1] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[2.e( empty local-lis/les=25/26 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=25) [1] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[2.1( empty local-lis/les=25/26 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=25) [1] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[2.15( empty local-lis/les=25/26 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=25) [1] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[2.d( empty local-lis/les=25/26 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=25) [1] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[2.9( empty local-lis/les=25/26 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=25) [1] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[2.10( empty local-lis/les=25/26 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=25) [1] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[2.6( empty local-lis/les=25/26 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=25) [1] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[2.1f( empty local-lis/les=25/26 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=25) [1] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[2.a( empty local-lis/les=25/26 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=25) [1] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[6.1b( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[6.1a( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[6.18( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[6.1f( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[2.1b( empty local-lis/les=25/26 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=25) [1] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[6.d( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[6.1e( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[2.1e( empty local-lis/les=25/26 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=25) [1] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[6.c( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[6.7( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[6.6( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[6.1( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[6.4( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[6.0( empty local-lis/les=25/26 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[2.4( empty local-lis/les=25/26 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=25) [1] r=0 lpr=25 pi=[20,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[6.3( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[6.5( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[6.2( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[6.f( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[6.19( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[6.9( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[6.e( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[6.a( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[6.8( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[6.14( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[6.17( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[6.15( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[6.b( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[6.16( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[6.11( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[6.10( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[6.13( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[6.1d( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[6.1c( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:57 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 26 pg[6.12( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=18/18 les/c/f=19/19/0 sis=25) [1] r=0 lpr=25 pi=[18,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:03:57 compute-0 systemd[1]: libpod-a1a662c6da47b2d4895135fc27665e5db40f027dabc1c8edf3a62fb7f4a6758c.scope: Deactivated successfully.
Dec 09 12:03:57 compute-0 conmon[86721]: conmon a1a662c6da47b2d48951 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a1a662c6da47b2d4895135fc27665e5db40f027dabc1c8edf3a62fb7f4a6758c.scope/container/memory.events
Dec 09 12:03:57 compute-0 ceph-mgr[74679]: [progress INFO root] Writing back 10 completed events
Dec 09 12:03:57 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 09 12:03:57 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:57 compute-0 podman[86746]: 2025-12-09 12:03:57.672989131 +0000 UTC m=+0.025549029 container died a1a662c6da47b2d4895135fc27665e5db40f027dabc1c8edf3a62fb7f4a6758c (image=quay.io/ceph/ceph:v19, name=bold_ptolemy, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 12:03:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-47c306c7cf7f685229431b479c0c3fd333872c9060b692699135fd7010aa5c75-merged.mount: Deactivated successfully.
Dec 09 12:03:57 compute-0 podman[86746]: 2025-12-09 12:03:57.709514611 +0000 UTC m=+0.062074499 container remove a1a662c6da47b2d4895135fc27665e5db40f027dabc1c8edf3a62fb7f4a6758c (image=quay.io/ceph/ceph:v19, name=bold_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec 09 12:03:57 compute-0 systemd[1]: libpod-conmon-a1a662c6da47b2d4895135fc27665e5db40f027dabc1c8edf3a62fb7f4a6758c.scope: Deactivated successfully.
Dec 09 12:03:57 compute-0 sudo[86702]: pam_unix(sudo:session): session closed for user root
Dec 09 12:03:57 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Dec 09 12:03:57 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Dec 09 12:03:58 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 09 12:03:58 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:58 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 09 12:03:58 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:58 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v83: 162 pgs: 31 unknown, 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 09 12:03:58 compute-0 ceph-mon[74388]: from='client.? 192.168.122.100:0/2255612717' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Dec 09 12:03:58 compute-0 ceph-mon[74388]: osdmap e26: 3 total, 2 up, 3 in
Dec 09 12:03:58 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 09 12:03:58 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:58 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:58 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:03:58 compute-0 ceph-mon[74388]: 2.8 deep-scrub starts
Dec 09 12:03:58 compute-0 ceph-mon[74388]: 2.8 deep-scrub ok
Dec 09 12:03:58 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 4.19 deep-scrub starts
Dec 09 12:03:58 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 4.19 deep-scrub ok
Dec 09 12:03:58 compute-0 python3[86836]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 09 12:03:59 compute-0 ceph-mon[74388]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 2 pool(s) do not have an application enabled)
Dec 09 12:03:59 compute-0 ceph-mon[74388]: log_channel(cluster) log [INF] : Cluster is now healthy
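POOL_APP_NOT_ENABLED clears once both cephfs pools carry an application tag, and the cluster reports healthy even though the pgmap still shows 31 "unknown" PGs; those appear to be the freshly split PGs of pool 6 (ec=25/18 in the osd.1 lines above) that are still peering. A manual check at this point might be:

    ceph health detail
    ceph -s    # overall status including the pgmap summary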
Dec 09 12:03:59 compute-0 python3[86907]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765281838.6166081-37215-59292667370362/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=ad866aa1f51f395809dd7ac5cb7a56d43c167b49 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 12:03:59 compute-0 ceph-mon[74388]: 5.19 scrub starts
Dec 09 12:03:59 compute-0 ceph-mon[74388]: 5.19 scrub ok
Dec 09 12:03:59 compute-0 ceph-mon[74388]: pgmap v83: 162 pgs: 31 unknown, 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 09 12:03:59 compute-0 ceph-mon[74388]: 4.19 deep-scrub starts
Dec 09 12:03:59 compute-0 ceph-mon[74388]: 4.19 deep-scrub ok
Dec 09 12:03:59 compute-0 ceph-mon[74388]: Health check cleared: POOL_APP_NOT_ENABLED (was: 2 pool(s) do not have an application enabled)
Dec 09 12:03:59 compute-0 ceph-mon[74388]: Cluster is now healthy
Dec 09 12:03:59 compute-0 ceph-mon[74388]: 2.7 deep-scrub starts
Dec 09 12:03:59 compute-0 ceph-mon[74388]: 2.7 deep-scrub ok
Dec 09 12:03:59 compute-0 sudo[87007]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzdlzxikhonwjljnfaannckchsjhhdfw ; /usr/bin/python3'
Dec 09 12:03:59 compute-0 sudo[87007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:03:59 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Dec 09 12:03:59 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Dec 09 12:03:59 compute-0 python3[87009]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 09 12:03:59 compute-0 sudo[87007]: pam_unix(sudo:session): session closed for user root
Dec 09 12:03:59 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0)
Dec 09 12:03:59 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
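This dispatch is osd.2 registering its own device class with the mons as it boots, which OSDs do by default on startup. The same operation run by an admin would look like the following (class and id from this log; note an already-set class must be removed before it can be changed):

    ceph osd crush rm-device-class 2
    ceph osd crush set-device-class hdd 2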
Dec 09 12:04:00 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 09 12:04:00 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:00 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 09 12:04:00 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:00 compute-0 sudo[87059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 09 12:04:00 compute-0 sudo[87059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:00 compute-0 sudo[87059]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:00 compute-0 sudo[87107]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sksmfiqvfdfpvpumxklgtrzmkkpoioxj ; /usr/bin/python3'
Dec 09 12:04:00 compute-0 sudo[87107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:04:00 compute-0 python3[87109]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765281839.6364324-37229-279328918293468/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=919db272b7d356c43aa088eea69464fe210a8090 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 12:04:00 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v84: 162 pgs: 31 unknown, 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 09 12:04:00 compute-0 sudo[87107]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:00 compute-0 sudo[87157]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fyfheojcnvkxgqfqxxqrgquslfrynzrh ; /usr/bin/python3'
Dec 09 12:04:00 compute-0 sudo[87157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:04:00 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Dec 09 12:04:00 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Dec 09 12:04:00 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e27 e27: 3 total, 2 up, 3 in
Dec 09 12:04:00 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 2 up, 3 in
Dec 09 12:04:00 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 09 12:04:00 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 09 12:04:00 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]} v 0)
Dec 09 12:04:00 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Dec 09 12:04:00 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e27 create-or-move crush item name 'osd.2' initial_weight 0.0195 at location {host=compute-2,root=default}
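create-or-move places osd.2 under host compute-2 in the default root with CRUSH weight 0.0195. CRUSH weights are conventionally the device capacity in TiB, and 0.0195 × 1024 ≈ 20 GiB, which matches the ~20 GiB per OSD implied by the 40 GiB total reported for the two OSDs already up. Run by hand this would be:

    ceph osd crush create-or-move 2 0.0195 host=compute-2 root=default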
Dec 09 12:04:00 compute-0 ceph-mgr[74679]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 09 12:04:00 compute-0 ceph-mon[74388]: 3.1e scrub starts
Dec 09 12:04:00 compute-0 ceph-mon[74388]: 3.1e scrub ok
Dec 09 12:04:00 compute-0 ceph-mon[74388]: from='osd.2 [v2:192.168.122.102:6800/3205527513,v1:192.168.122.102:6801/3205527513]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Dec 09 12:04:00 compute-0 ceph-mon[74388]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Dec 09 12:04:00 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:00 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:00 compute-0 ceph-mon[74388]: 2.2 scrub starts
Dec 09 12:04:00 compute-0 ceph-mon[74388]: 2.2 scrub ok
Dec 09 12:04:00 compute-0 python3[87159]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 750b57e3-924f-51a5-ab09-01517535f732 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
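For readability, the podman invocation recorded above, reflowed one argument per line (paths, image tag and fsid are copied verbatim from the log entry; this is an illustration, not a replayed command):

    podman run --rm --net=host --ipc=host \
      --volume /etc/ceph:/etc/ceph:z \
      --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
      --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z \
      --entrypoint ceph quay.io/ceph/ceph:v19 \
      --fsid 750b57e3-924f-51a5-ab09-01517535f732 \
      -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
      config assimilate-conf -i /home/assimilate_ceph.conf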
Dec 09 12:04:00 compute-0 podman[87160]: 2025-12-09 12:04:00.744796222 +0000 UTC m=+0.036493090 container create 477701df9634f4f0ea972ad9604aab7320d36856e2c302d5b348ab3575e72d2c (image=quay.io/ceph/ceph:v19, name=serene_robinson, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec 09 12:04:00 compute-0 systemd[1]: Started libpod-conmon-477701df9634f4f0ea972ad9604aab7320d36856e2c302d5b348ab3575e72d2c.scope.
Dec 09 12:04:00 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 3.1b deep-scrub starts
Dec 09 12:04:00 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 3.1b deep-scrub ok
Dec 09 12:04:00 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:04:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb04d89a816f387dfe3cfb30e44cc7ebdf1fc9c3acf5f79fd2f5653e21af79d4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb04d89a816f387dfe3cfb30e44cc7ebdf1fc9c3acf5f79fd2f5653e21af79d4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb04d89a816f387dfe3cfb30e44cc7ebdf1fc9c3acf5f79fd2f5653e21af79d4/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:00 compute-0 podman[87160]: 2025-12-09 12:04:00.818914337 +0000 UTC m=+0.110611205 container init 477701df9634f4f0ea972ad9604aab7320d36856e2c302d5b348ab3575e72d2c (image=quay.io/ceph/ceph:v19, name=serene_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 12:04:00 compute-0 podman[87160]: 2025-12-09 12:04:00.8257352 +0000 UTC m=+0.117432058 container start 477701df9634f4f0ea972ad9604aab7320d36856e2c302d5b348ab3575e72d2c (image=quay.io/ceph/ceph:v19, name=serene_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 09 12:04:00 compute-0 podman[87160]: 2025-12-09 12:04:00.729921553 +0000 UTC m=+0.021618431 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:04:00 compute-0 podman[87160]: 2025-12-09 12:04:00.829102491 +0000 UTC m=+0.120799369 container attach 477701df9634f4f0ea972ad9604aab7320d36856e2c302d5b348ab3575e72d2c (image=quay.io/ceph/ceph:v19, name=serene_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec 09 12:04:01 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Dec 09 12:04:01 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2927394932' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec 09 12:04:01 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2927394932' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec 09 12:04:01 compute-0 serene_robinson[87175]: 
Dec 09 12:04:01 compute-0 serene_robinson[87175]: [global]
Dec 09 12:04:01 compute-0 serene_robinson[87175]:         fsid = 750b57e3-924f-51a5-ab09-01517535f732
Dec 09 12:04:01 compute-0 serene_robinson[87175]:         mon_host = 192.168.122.100
Dec 09 12:04:01 compute-0 systemd[1]: libpod-477701df9634f4f0ea972ad9604aab7320d36856e2c302d5b348ab3575e72d2c.scope: Deactivated successfully.
Dec 09 12:04:01 compute-0 podman[87160]: 2025-12-09 12:04:01.194388019 +0000 UTC m=+0.486084877 container died 477701df9634f4f0ea972ad9604aab7320d36856e2c302d5b348ab3575e72d2c (image=quay.io/ceph/ceph:v19, name=serene_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec 09 12:04:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb04d89a816f387dfe3cfb30e44cc7ebdf1fc9c3acf5f79fd2f5653e21af79d4-merged.mount: Deactivated successfully.
Dec 09 12:04:01 compute-0 podman[87160]: 2025-12-09 12:04:01.235446967 +0000 UTC m=+0.527143825 container remove 477701df9634f4f0ea972ad9604aab7320d36856e2c302d5b348ab3575e72d2c (image=quay.io/ceph/ceph:v19, name=serene_robinson, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 12:04:01 compute-0 systemd[1]: libpod-conmon-477701df9634f4f0ea972ad9604aab7320d36856e2c302d5b348ab3575e72d2c.scope: Deactivated successfully.
Dec 09 12:04:01 compute-0 sudo[87157]: pam_unix(sudo:session): session closed for user root
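The [global] block echoed by the serene_robinson container above is the normal output of 'ceph config assimilate-conf': every option it can store is moved into the monitors' central config database, and it prints back the minimal remainder (here fsid and mon_host) that has to stay in a local ceph.conf. A sketch for verifying what was absorbed, assuming the rendered ceph_rgw.conf.j2 carried rgw_* options:

    ceph config dump | grep -i rgw   # options assimilated into the mon config db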
Dec 09 12:04:01 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 09 12:04:01 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:01 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 09 12:04:01 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e27 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 12:04:01 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:01 compute-0 sudo[87234]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blzcybmfwqoezovbwbglapolwnhomrkz ; /usr/bin/python3'
Dec 09 12:04:01 compute-0 sudo[87234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:04:01 compute-0 python3[87236]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 750b57e3-924f-51a5-ab09-01517535f732 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
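This second one-shot container stores the literal string no_sslv2:sslv3:no_tlsv1:no_tlsv1_1 under the config-key 'ssl_option'; the colon-separated form suggests it is later consumed as front-end TLS options for the RGW ingress, though this log does not show the consumer. The value is stored verbatim and can be read back as a sanity check:

    ceph config-key get ssl_option   # should print no_sslv2:sslv3:no_tlsv1:no_tlsv1_1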
Dec 09 12:04:01 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 09 12:04:01 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:01 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 09 12:04:01 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:01 compute-0 podman[87237]: 2025-12-09 12:04:01.622062235 +0000 UTC m=+0.041832785 container create a30cff50f1addeea24f49ab493064e858234122fd5ff4505d1cb4a45370d2a18 (image=quay.io/ceph/ceph:v19, name=zen_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 12:04:01 compute-0 systemd[1]: Started libpod-conmon-a30cff50f1addeea24f49ab493064e858234122fd5ff4505d1cb4a45370d2a18.scope.
Dec 09 12:04:01 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Dec 09 12:04:01 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]': finished
Dec 09 12:04:01 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e28 e28: 3 total, 2 up, 3 in
Dec 09 12:04:01 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:04:01 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 2 up, 3 in
Dec 09 12:04:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce4d3965de709b4f42acc7db6b0836dd74b7ffd72a72b0ddae5afaaadaf90fbe/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce4d3965de709b4f42acc7db6b0836dd74b7ffd72a72b0ddae5afaaadaf90fbe/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce4d3965de709b4f42acc7db6b0836dd74b7ffd72a72b0ddae5afaaadaf90fbe/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:01 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 09 12:04:01 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 09 12:04:01 compute-0 ceph-mgr[74679]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 09 12:04:01 compute-0 podman[87237]: 2025-12-09 12:04:01.60335951 +0000 UTC m=+0.023130080 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:04:01 compute-0 ceph-mgr[74679]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/3205527513; not ready for session (expect reconnect)
Dec 09 12:04:01 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 09 12:04:01 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 09 12:04:01 compute-0 ceph-mgr[74679]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 09 12:04:01 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 28 pg[4.19( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.742877960s) [] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active pruub 66.581893921s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:04:01 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 28 pg[3.1b( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=8.726714134s) [] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 active pruub 65.565757751s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:04:01 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 28 pg[4.19( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.742877960s) [] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.581893921s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:04:01 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 28 pg[3.1b( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=8.726714134s) [] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 65.565757751s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:04:01 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 28 pg[4.1c( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.742674828s) [] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active pruub 66.581787109s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:04:01 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 28 pg[4.1c( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.742674828s) [] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.581787109s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:04:01 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 28 pg[2.1b( empty local-lis/les=25/26 n=0 ec=20/13 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=11.908179283s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active pruub 68.747467041s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:04:01 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 28 pg[2.1b( empty local-lis/les=25/26 n=0 ec=20/13 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=11.908179283s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.747467041s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:04:01 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 28 pg[3.8( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=8.726395607s) [] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 active pruub 65.565734863s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:04:01 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 28 pg[3.8( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=8.726395607s) [] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 65.565734863s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:04:01 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 28 pg[4.3( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.742242813s) [] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active pruub 66.581634521s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:04:01 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 28 pg[4.3( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.742242813s) [] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.581634521s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:04:01 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 28 pg[4.6( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.742090225s) [] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active pruub 66.581565857s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:04:01 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 28 pg[4.6( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.742090225s) [] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.581565857s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:04:01 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 28 pg[4.2( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.741989136s) [] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active pruub 66.581497192s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:04:01 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 28 pg[4.2( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.741989136s) [] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.581497192s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:04:01 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 28 pg[5.0( empty local-lis/les=23/24 n=0 ec=16/16 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.741826057s) [] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active pruub 66.581420898s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:04:01 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 28 pg[5.0( empty local-lis/les=23/24 n=0 ec=16/16 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.741826057s) [] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.581420898s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:04:01 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 28 pg[3.0( empty local-lis/les=22/23 n=0 ec=14/14 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=8.725264549s) [] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 active pruub 65.564895630s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:04:01 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 28 pg[3.0( empty local-lis/les=22/23 n=0 ec=14/14 lis/c=22/22 les/c/f=23/23/0 sis=28 pruub=8.725264549s) [] r=-1 lpr=28 pi=[22,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 65.564895630s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:04:01 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 28 pg[4.1d( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.742502213s) [] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active pruub 66.581771851s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:04:01 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 28 pg[5.d( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.741518974s) [] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active pruub 66.581283569s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:04:01 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 28 pg[4.1d( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.742502213s) [] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.581771851s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:04:01 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 28 pg[2.d( empty local-lis/les=25/26 n=0 ec=20/13 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=11.907264709s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active pruub 68.747039795s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:04:01 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 28 pg[5.d( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.741518974s) [] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.581283569s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:04:01 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 28 pg[2.d( empty local-lis/les=25/26 n=0 ec=20/13 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=11.907264709s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.747039795s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:04:01 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 28 pg[2.a( empty local-lis/les=25/26 n=0 ec=20/13 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=11.907455444s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active pruub 68.747337341s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:04:01 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 28 pg[2.a( empty local-lis/les=25/26 n=0 ec=20/13 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=11.907455444s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.747337341s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:04:01 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 28 pg[2.c( empty local-lis/les=25/26 n=0 ec=20/13 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=11.906830788s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active pruub 68.746772766s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:04:01 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 28 pg[2.c( empty local-lis/les=25/26 n=0 ec=20/13 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=11.906830788s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.746772766s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:04:01 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 28 pg[5.b( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.741827965s) [] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active pruub 66.581809998s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:04:01 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 28 pg[5.b( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.741827965s) [] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.581809998s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:04:01 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 28 pg[2.10( empty local-lis/les=25/26 n=0 ec=20/13 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=11.907119751s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active pruub 68.747215271s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:04:01 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 28 pg[2.10( empty local-lis/les=25/26 n=0 ec=20/13 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=11.907119751s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.747215271s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:04:01 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 28 pg[2.13( empty local-lis/les=25/26 n=0 ec=20/13 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=11.906658173s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active pruub 68.746795654s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:04:01 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 28 pg[5.8( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.741060257s) [] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active pruub 66.581207275s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:04:01 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 28 pg[2.13( empty local-lis/les=25/26 n=0 ec=20/13 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=11.906658173s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.746795654s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:04:01 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 28 pg[5.8( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.741060257s) [] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.581207275s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:04:01 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 28 pg[4.14( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.740657806s) [] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active pruub 66.580871582s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:04:01 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 28 pg[4.14( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.740657806s) [] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.580871582s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:04:01 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 28 pg[5.12( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.734370232s) [] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active pruub 66.574615479s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:04:01 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 28 pg[5.12( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.734370232s) [] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.574615479s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:04:01 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 28 pg[2.15( empty local-lis/les=25/26 n=0 ec=20/13 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=11.906667709s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active pruub 68.746955872s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:04:01 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 28 pg[2.15( empty local-lis/les=25/26 n=0 ec=20/13 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=11.906667709s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.746955872s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:04:01 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 28 pg[5.13( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.734229088s) [] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active pruub 66.574554443s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:04:01 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 28 pg[5.13( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.734229088s) [] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.574554443s@ mbc={}] state<Start>: transitioning to Stray
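The burst of PeeringState lines above is osd.1 reacting to osdmap e28, published right after osd.2's crush create-or-move finished: CRUSH transiently remaps these empty PGs off osd.1, each local PG instance sees its acting set go from [1] to [] and drops to Stray until a new interval places it again. The later 'pgmap ... 162 active+clean' entry shows the remap resolving on its own. Were a PG to stay stuck, it could be inspected like this (hypothetical commands for this cluster, run from an admin host):

    ceph pg 4.19 query          # full peering history and state for one PG
    ceph pg dump_stuck unclean  # any PGs that failed to return to active+clean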
Dec 09 12:04:01 compute-0 podman[87237]: 2025-12-09 12:04:01.714102848 +0000 UTC m=+0.133873408 container init a30cff50f1addeea24f49ab493064e858234122fd5ff4505d1cb4a45370d2a18 (image=quay.io/ceph/ceph:v19, name=zen_torvalds, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Dec 09 12:04:01 compute-0 podman[87237]: 2025-12-09 12:04:01.71934902 +0000 UTC m=+0.139119560 container start a30cff50f1addeea24f49ab493064e858234122fd5ff4505d1cb4a45370d2a18 (image=quay.io/ceph/ceph:v19, name=zen_torvalds, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 12:04:01 compute-0 podman[87237]: 2025-12-09 12:04:01.722839645 +0000 UTC m=+0.142610205 container attach a30cff50f1addeea24f49ab493064e858234122fd5ff4505d1cb4a45370d2a18 (image=quay.io/ceph/ceph:v19, name=zen_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 12:04:01 compute-0 ceph-mon[74388]: pgmap v84: 162 pgs: 31 unknown, 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 09 12:04:01 compute-0 ceph-mon[74388]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Dec 09 12:04:01 compute-0 ceph-mon[74388]: osdmap e27: 3 total, 2 up, 3 in
Dec 09 12:04:01 compute-0 ceph-mon[74388]: from='osd.2 [v2:192.168.122.102:6800/3205527513,v1:192.168.122.102:6801/3205527513]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Dec 09 12:04:01 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 09 12:04:01 compute-0 ceph-mon[74388]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Dec 09 12:04:01 compute-0 ceph-mon[74388]: 3.1b deep-scrub starts
Dec 09 12:04:01 compute-0 ceph-mon[74388]: 3.1b deep-scrub ok
Dec 09 12:04:01 compute-0 ceph-mon[74388]: from='client.? 192.168.122.100:0/2927394932' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec 09 12:04:01 compute-0 ceph-mon[74388]: from='client.? 192.168.122.100:0/2927394932' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec 09 12:04:01 compute-0 ceph-mon[74388]: 2.5 scrub starts
Dec 09 12:04:01 compute-0 ceph-mon[74388]: 2.5 scrub ok
Dec 09 12:04:01 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:01 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:01 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:01 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:01 compute-0 ceph-mon[74388]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]': finished
Dec 09 12:04:01 compute-0 ceph-mon[74388]: osdmap e28: 3 total, 2 up, 3 in
Dec 09 12:04:01 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 09 12:04:01 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Dec 09 12:04:01 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Dec 09 12:04:02 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0)
Dec 09 12:04:02 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/491258032' entity='client.admin' 
Dec 09 12:04:02 compute-0 zen_torvalds[87252]: set ssl_option
Dec 09 12:04:02 compute-0 systemd[1]: libpod-a30cff50f1addeea24f49ab493064e858234122fd5ff4505d1cb4a45370d2a18.scope: Deactivated successfully.
Dec 09 12:04:02 compute-0 podman[87237]: 2025-12-09 12:04:02.245485601 +0000 UTC m=+0.665256141 container died a30cff50f1addeea24f49ab493064e858234122fd5ff4505d1cb4a45370d2a18 (image=quay.io/ceph/ceph:v19, name=zen_torvalds, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec 09 12:04:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce4d3965de709b4f42acc7db6b0836dd74b7ffd72a72b0ddae5afaaadaf90fbe-merged.mount: Deactivated successfully.
Dec 09 12:04:02 compute-0 podman[87237]: 2025-12-09 12:04:02.282039791 +0000 UTC m=+0.701810331 container remove a30cff50f1addeea24f49ab493064e858234122fd5ff4505d1cb4a45370d2a18 (image=quay.io/ceph/ceph:v19, name=zen_torvalds, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec 09 12:04:02 compute-0 systemd[1]: libpod-conmon-a30cff50f1addeea24f49ab493064e858234122fd5ff4505d1cb4a45370d2a18.scope: Deactivated successfully.
Dec 09 12:04:02 compute-0 sudo[87234]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:02 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v87: 162 pgs: 162 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 09 12:04:02 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 09 12:04:02 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 09 12:04:02 compute-0 sudo[87311]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qypdahtykypankwyhnmackcshtanjfzs ; /usr/bin/python3'
Dec 09 12:04:02 compute-0 sudo[87311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:04:02 compute-0 ceph-mgr[74679]: [progress INFO root] Completed event d4a4317f-54d7-4d2e-870f-8671579c4664 (Global Recovery Event) in 10 seconds
Dec 09 12:04:02 compute-0 ceph-mgr[74679]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/3205527513; not ready for session (expect reconnect)
Dec 09 12:04:02 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 09 12:04:02 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 09 12:04:02 compute-0 ceph-mgr[74679]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 09 12:04:02 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 09 12:04:02 compute-0 ceph-mon[74388]: 5.1d scrub starts
Dec 09 12:04:02 compute-0 ceph-mon[74388]: 5.1d scrub ok
Dec 09 12:04:02 compute-0 ceph-mon[74388]: from='client.? 192.168.122.100:0/491258032' entity='client.admin' 
Dec 09 12:04:02 compute-0 ceph-mon[74388]: 2.0 deep-scrub starts
Dec 09 12:04:02 compute-0 ceph-mon[74388]: 2.0 deep-scrub ok
Dec 09 12:04:02 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 09 12:04:02 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 09 12:04:02 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 4.f scrub starts
Dec 09 12:04:02 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 4.f scrub ok
Dec 09 12:04:02 compute-0 python3[87313]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 750b57e3-924f-51a5-ab09-01517535f732 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 12:04:02 compute-0 podman[87314]: 2025-12-09 12:04:02.85896832 +0000 UTC m=+0.046293321 container create e92fab8e6b4ff6267cc678ab015c2fa1ee095e68491dc049ef6d001120ebd4ce (image=quay.io/ceph/ceph:v19, name=exciting_galileo, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 12:04:02 compute-0 systemd[1]: Started libpod-conmon-e92fab8e6b4ff6267cc678ab015c2fa1ee095e68491dc049ef6d001120ebd4ce.scope.
Dec 09 12:04:02 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:04:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d6b5816d8df4f2ed0ae5b661b708070039d779e616ad51917323ffd5e593f9c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d6b5816d8df4f2ed0ae5b661b708070039d779e616ad51917323ffd5e593f9c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d6b5816d8df4f2ed0ae5b661b708070039d779e616ad51917323ffd5e593f9c/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:02 compute-0 podman[87314]: 2025-12-09 12:04:02.836303296 +0000 UTC m=+0.023628327 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:04:02 compute-0 podman[87314]: 2025-12-09 12:04:02.938893225 +0000 UTC m=+0.126218236 container init e92fab8e6b4ff6267cc678ab015c2fa1ee095e68491dc049ef6d001120ebd4ce (image=quay.io/ceph/ceph:v19, name=exciting_galileo, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 09 12:04:02 compute-0 podman[87314]: 2025-12-09 12:04:02.944620633 +0000 UTC m=+0.131945624 container start e92fab8e6b4ff6267cc678ab015c2fa1ee095e68491dc049ef6d001120ebd4ce (image=quay.io/ceph/ceph:v19, name=exciting_galileo, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 09 12:04:02 compute-0 podman[87314]: 2025-12-09 12:04:02.94848651 +0000 UTC m=+0.135811501 container attach e92fab8e6b4ff6267cc678ab015c2fa1ee095e68491dc049ef6d001120ebd4ce (image=quay.io/ceph/ceph:v19, name=exciting_galileo, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid)
Dec 09 12:04:03 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Dec 09 12:04:03 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 09 12:04:03 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e29 e29: 3 total, 2 up, 3 in
Dec 09 12:04:03 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 2 up, 3 in
Dec 09 12:04:03 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 09 12:04:03 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 09 12:04:03 compute-0 ceph-mgr[74679]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 09 12:04:03 compute-0 ceph-mgr[74679]: log_channel(audit) log [DBG] : from='client.14295 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 12:04:03 compute-0 ceph-mgr[74679]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Dec 09 12:04:03 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Dec 09 12:04:03 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec 09 12:04:03 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:03 compute-0 ceph-mgr[74679]: [cephadm INFO root] Saving service ingress.rgw.default spec with placement count:2
Dec 09 12:04:03 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Saving service ingress.rgw.default spec with placement count:2
Dec 09 12:04:03 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Dec 09 12:04:03 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:03 compute-0 exciting_galileo[87329]: Scheduled rgw.rgw update...
Dec 09 12:04:03 compute-0 exciting_galileo[87329]: Scheduled ingress.rgw.default update...
Dec 09 12:04:03 compute-0 systemd[1]: libpod-e92fab8e6b4ff6267cc678ab015c2fa1ee095e68491dc049ef6d001120ebd4ce.scope: Deactivated successfully.
Dec 09 12:04:03 compute-0 podman[87314]: 2025-12-09 12:04:03.585359248 +0000 UTC m=+0.772684279 container died e92fab8e6b4ff6267cc678ab015c2fa1ee095e68491dc049ef6d001120ebd4ce (image=quay.io/ceph/ceph:v19, name=exciting_galileo, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 09 12:04:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d6b5816d8df4f2ed0ae5b661b708070039d779e616ad51917323ffd5e593f9c-merged.mount: Deactivated successfully.
Dec 09 12:04:03 compute-0 podman[87314]: 2025-12-09 12:04:03.635551697 +0000 UTC m=+0.822876688 container remove e92fab8e6b4ff6267cc678ab015c2fa1ee095e68491dc049ef6d001120ebd4ce (image=quay.io/ceph/ceph:v19, name=exciting_galileo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec 09 12:04:03 compute-0 systemd[1]: libpod-conmon-e92fab8e6b4ff6267cc678ab015c2fa1ee095e68491dc049ef6d001120ebd4ce.scope: Deactivated successfully.
Dec 09 12:04:03 compute-0 sudo[87311]: pam_unix(sudo:session): session closed for user root
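The orch apply run above feeds /tmp/ceph_rgw.yml to cephadm, which then logs 'Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2' and 'Saving service ingress.rgw.default spec with placement count:2'. A spec consistent with those two messages might look like the following sketch (service names and placement come from the log; the ingress ports and virtual_ip are assumptions, since the log does not record them):

    cat > /tmp/ceph_rgw.yml <<'EOF'
    service_type: rgw
    service_id: rgw
    placement:
      hosts:
        - compute-0
        - compute-1
        - compute-2
    ---
    service_type: ingress
    service_id: rgw.default
    placement:
      count: 2
    spec:
      backend_service: rgw.rgw        # assumed: binds the ingress to the rgw service above
      frontend_port: 8080             # assumed, not in this log
      monitor_port: 8999              # assumed, not in this log
      virtual_ip: 192.168.122.254/24  # assumed, not in this log
    EOF
    ceph orch apply --in-file /tmp/ceph_rgw.yml   # inside the container the file is mounted at /home/ceph_spec.yaml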
Dec 09 12:04:03 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 09 12:04:03 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:03 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 09 12:04:03 compute-0 ceph-mgr[74679]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/3205527513; not ready for session (expect reconnect)
Dec 09 12:04:03 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 09 12:04:03 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 09 12:04:03 compute-0 ceph-mgr[74679]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 09 12:04:03 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:03 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Dec 09 12:04:03 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Dec 09 12:04:03 compute-0 ceph-mgr[74679]: [cephadm INFO root] Adjusting osd_memory_target on compute-2 to 127.9M
Dec 09 12:04:03 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-2 to 127.9M
Dec 09 12:04:03 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec 09 12:04:03 compute-0 ceph-mgr[74679]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-2 to 134206259: error parsing value: Value '134206259' is below minimum 939524096
Dec 09 12:04:03 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-2 to 134206259: error parsing value: Value '134206259' is below minimum 939524096
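The WRN pair above is cephadm's memory autotuner at work: it computed 134206259 bytes (about 127.9 MiB) as compute-2's share and tried to set osd_memory_target to that, but the option enforces a floor of 939524096 bytes (896 MiB), so the set was rejected and only the preceding 'config rm' took effect. Harmless on a RAM-starved CI node; in a lab one could quiet it by pinning a compliant value or disabling autotuning (a sketch, not taken from this log):

    ceph config set osd osd_memory_target_autotune false
    ceph config set osd osd_memory_target 939524096   # the minimum the option accepts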
Dec 09 12:04:03 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 12:04:03 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:04:03 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 09 12:04:03 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 09 12:04:03 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Dec 09 12:04:03 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Dec 09 12:04:03 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Dec 09 12:04:03 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Dec 09 12:04:03 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Dec 09 12:04:03 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Dec 09 12:04:03 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Dec 09 12:04:03 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Dec 09 12:04:03 compute-0 ceph-mon[74388]: purged_snaps scrub starts
Dec 09 12:04:03 compute-0 ceph-mon[74388]: purged_snaps scrub ok
Dec 09 12:04:03 compute-0 ceph-mon[74388]: pgmap v87: 162 pgs: 162 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 09 12:04:03 compute-0 ceph-mon[74388]: 4.f scrub starts
Dec 09 12:04:03 compute-0 ceph-mon[74388]: 4.f scrub ok
Dec 09 12:04:03 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 09 12:04:03 compute-0 ceph-mon[74388]: osdmap e29: 3 total, 2 up, 3 in
Dec 09 12:04:03 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 09 12:04:03 compute-0 ceph-mon[74388]: 2.3 scrub starts
Dec 09 12:04:03 compute-0 ceph-mon[74388]: 2.3 scrub ok
Dec 09 12:04:03 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:03 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:03 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:03 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 09 12:04:03 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:03 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Dec 09 12:04:03 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:04:03 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 09 12:04:03 compute-0 sudo[87366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 09 12:04:03 compute-0 sudo[87366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:03 compute-0 sudo[87366]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:03 compute-0 sudo[87395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/etc/ceph
Dec 09 12:04:03 compute-0 sudo[87395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:03 compute-0 sudo[87395]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:03 compute-0 sudo[87447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/etc/ceph/ceph.conf.new
Dec 09 12:04:03 compute-0 sudo[87447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:03 compute-0 sudo[87447]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:03 compute-0 sudo[87493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732
Dec 09 12:04:03 compute-0 sudo[87493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:03 compute-0 sudo[87493]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:04 compute-0 sudo[87542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/etc/ceph/ceph.conf.new
Dec 09 12:04:04 compute-0 sudo[87542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:04 compute-0 sudo[87542]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:04 compute-0 python3[87539]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_dashboard.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 09 12:04:04 compute-0 sudo[87590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/etc/ceph/ceph.conf.new
Dec 09 12:04:04 compute-0 sudo[87590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:04 compute-0 sudo[87590]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:04 compute-0 sudo[87633]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/etc/ceph/ceph.conf.new
Dec 09 12:04:04 compute-0 sudo[87633]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:04 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 29 pg[6.1a( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=9.410598755s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 68.747406006s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:04:04 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 29 pg[6.1a( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=9.410553932s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.747406006s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:04:04 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 29 pg[6.19( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=9.410925865s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 68.747840881s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:04:04 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 29 pg[6.19( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=9.410893440s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.747840881s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:04:04 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 29 pg[6.1e( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=9.410512924s) [] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 68.747558594s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:04:04 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 29 pg[6.1e( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=9.410512924s) [] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.747558594s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:04:04 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 29 pg[6.1b( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=9.410231590s) [] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 68.747367859s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:04:04 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 29 pg[6.1b( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=9.410231590s) [] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.747367859s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:04:04 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 29 pg[6.d( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=9.410259247s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 68.747505188s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:04:04 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 29 pg[6.1( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=9.410384178s) [] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 68.747634888s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:04:04 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 29 pg[6.d( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=9.410241127s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.747505188s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:04:04 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 29 pg[6.1( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=9.410384178s) [] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.747634888s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:04:04 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 29 pg[6.7( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=9.410239220s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 68.747657776s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:04:04 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 29 pg[6.7( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=9.410221100s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.747657776s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:04:04 compute-0 sudo[87633]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:04 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 29 pg[6.3( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=9.410249710s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 68.747772217s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:04:04 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 29 pg[6.3( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=9.410232544s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.747772217s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:04:04 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 29 pg[6.2( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=9.410205841s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 68.747802734s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:04:04 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 29 pg[6.5( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=9.410157204s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 68.747779846s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:04:04 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 29 pg[6.2( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=9.410187721s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.747802734s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:04:04 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 29 pg[6.5( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=9.410140991s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.747779846s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:04:04 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 29 pg[6.e( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=9.410148621s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 68.747871399s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:04:04 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 29 pg[6.e( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=9.410135269s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.747871399s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:04:04 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 29 pg[6.8( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=9.412717819s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 68.750541687s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:04:04 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 29 pg[6.15( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=9.412721634s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 68.750572205s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:04:04 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 29 pg[6.8( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=9.412685394s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.750541687s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:04:04 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 29 pg[6.a( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=9.410022736s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 68.747894287s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:04:04 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 29 pg[6.15( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=9.412703514s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.750572205s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:04:04 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 29 pg[6.a( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=9.410004616s) [0] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.747894287s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:04:04 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 29 pg[6.17( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=9.412606239s) [] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 68.750564575s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:04:04 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 29 pg[6.17( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=9.412606239s) [] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.750564575s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:04:04 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 29 pg[6.12( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=9.412719727s) [] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 68.750984192s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:04:04 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 29 pg[6.12( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=9.412719727s) [] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.750984192s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:04:04 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 29 pg[6.1c( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=9.412560463s) [] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active pruub 68.750968933s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:04:04 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 29 pg[6.1c( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=29 pruub=9.412560463s) [] r=-1 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.750968933s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:04:04 compute-0 sudo[87669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 09 12:04:04 compute-0 sudo[87669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:04 compute-0 sudo[87669]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:04 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.conf
Dec 09 12:04:04 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.conf
Dec 09 12:04:04 compute-0 sudo[87712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config
Dec 09 12:04:04 compute-0 sudo[87712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:04 compute-0 sudo[87712]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:04 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v89: 162 pgs: 162 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 09 12:04:04 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.conf
Dec 09 12:04:04 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.conf
Dec 09 12:04:04 compute-0 sudo[87761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config
Dec 09 12:04:04 compute-0 sudo[87761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:04 compute-0 sudo[87761]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:04 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.conf
Dec 09 12:04:04 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.conf
Dec 09 12:04:04 compute-0 sudo[87786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.conf.new
Dec 09 12:04:04 compute-0 sudo[87786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:04 compute-0 sudo[87786]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:04 compute-0 python3[87758]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765281843.8356004-37249-102269416280815/source dest=/tmp/ceph_dashboard.yml mode=0644 force=True follow=False _original_basename=ceph_monitoring_stack.yml.j2 checksum=2701faaa92cae31b5bbad92984c27e2af7a44b84 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 12:04:04 compute-0 sudo[87811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732
Dec 09 12:04:04 compute-0 sudo[87811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:04 compute-0 sudo[87811]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:04 compute-0 sudo[87836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.conf.new
Dec 09 12:04:04 compute-0 sudo[87836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:04 compute-0 sudo[87836]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:04 compute-0 sudo[87908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.conf.new
Dec 09 12:04:04 compute-0 sudo[87908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:04 compute-0 sudo[87908]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:04 compute-0 ceph-mgr[74679]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/3205527513; not ready for session (expect reconnect)
Dec 09 12:04:04 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 09 12:04:04 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 09 12:04:04 compute-0 ceph-mgr[74679]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 09 12:04:04 compute-0 sudo[87933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.conf.new
Dec 09 12:04:04 compute-0 sudo[87933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:04 compute-0 sudo[87933]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:04 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Dec 09 12:04:04 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Dec 09 12:04:04 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Dec 09 12:04:04 compute-0 ceph-mon[74388]: from='client.14295 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 12:04:04 compute-0 ceph-mon[74388]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Dec 09 12:04:04 compute-0 ceph-mon[74388]: Saving service ingress.rgw.default spec with placement count:2
Dec 09 12:04:04 compute-0 ceph-mon[74388]: Adjusting osd_memory_target on compute-2 to 127.9M
Dec 09 12:04:04 compute-0 ceph-mon[74388]: Unable to set osd_memory_target on compute-2 to 134206259: error parsing value: Value '134206259' is below minimum 939524096
Dec 09 12:04:04 compute-0 ceph-mon[74388]: Updating compute-0:/etc/ceph/ceph.conf
Dec 09 12:04:04 compute-0 ceph-mon[74388]: Updating compute-1:/etc/ceph/ceph.conf
Dec 09 12:04:04 compute-0 ceph-mon[74388]: Updating compute-2:/etc/ceph/ceph.conf
Dec 09 12:04:04 compute-0 ceph-mon[74388]: 3.4 scrub starts
Dec 09 12:04:04 compute-0 ceph-mon[74388]: 3.4 scrub ok
Dec 09 12:04:04 compute-0 ceph-mon[74388]: 2.11 scrub starts
Dec 09 12:04:04 compute-0 ceph-mon[74388]: 2.11 scrub ok
Dec 09 12:04:04 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 09 12:04:04 compute-0 sudo[87958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.conf.new /var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.conf
Dec 09 12:04:04 compute-0 sudo[87958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:04 compute-0 sudo[87958]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:04 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 12:04:04 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e30 e30: 3 total, 2 up, 3 in
Dec 09 12:04:04 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 2 up, 3 in
Dec 09 12:04:04 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 09 12:04:04 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 09 12:04:04 compute-0 ceph-mgr[74679]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 09 12:04:04 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:04 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 12:04:04 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:04 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 09 12:04:04 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:04 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 09 12:04:04 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:05 compute-0 sudo[88006]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnxgwrouohziyknrvxialhvkxudqstob ; /usr/bin/python3'
Dec 09 12:04:05 compute-0 sudo[88006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:04:05 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 09 12:04:05 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:05 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 09 12:04:05 compute-0 python3[88008]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_dashboard.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 750b57e3-924f-51a5-ab09-01517535f732 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 12:04:05 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:05 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 09 12:04:05 compute-0 podman[88009]: 2025-12-09 12:04:05.263647139 +0000 UTC m=+0.081235219 container create 38b654153a2ddb5405ba3641ffa63f4ed053e25f1a1df64ee4daa6d03b7486ef (image=quay.io/ceph/ceph:v19, name=sweet_greider, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Dec 09 12:04:05 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:05 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 09 12:04:05 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 09 12:04:05 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 09 12:04:05 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 09 12:04:05 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 12:04:05 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:04:05 compute-0 podman[88009]: 2025-12-09 12:04:05.205646575 +0000 UTC m=+0.023234665 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:04:05 compute-0 systemd[1]: Started libpod-conmon-38b654153a2ddb5405ba3641ffa63f4ed053e25f1a1df64ee4daa6d03b7486ef.scope.
Dec 09 12:04:05 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:04:05 compute-0 sudo[88024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 12:04:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/427cafea5dca06a536a4741f04ff2eeafdd3d1c0eb370f431df188d48f04fbaa/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/427cafea5dca06a536a4741f04ff2eeafdd3d1c0eb370f431df188d48f04fbaa/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/427cafea5dca06a536a4741f04ff2eeafdd3d1c0eb370f431df188d48f04fbaa/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:05 compute-0 sudo[88024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:05 compute-0 sudo[88024]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:05 compute-0 podman[88009]: 2025-12-09 12:04:05.369883109 +0000 UTC m=+0.187471209 container init 38b654153a2ddb5405ba3641ffa63f4ed053e25f1a1df64ee4daa6d03b7486ef (image=quay.io/ceph/ceph:v19, name=sweet_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec 09 12:04:05 compute-0 podman[88009]: 2025-12-09 12:04:05.376413983 +0000 UTC m=+0.194002053 container start 38b654153a2ddb5405ba3641ffa63f4ed053e25f1a1df64ee4daa6d03b7486ef (image=quay.io/ceph/ceph:v19, name=sweet_greider, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 09 12:04:05 compute-0 podman[88009]: 2025-12-09 12:04:05.380341992 +0000 UTC m=+0.197930092 container attach 38b654153a2ddb5405ba3641ffa63f4ed053e25f1a1df64ee4daa6d03b7486ef (image=quay.io/ceph/ceph:v19, name=sweet_greider, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 09 12:04:05 compute-0 sudo[88052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 750b57e3-924f-51a5-ab09-01517535f732 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 09 12:04:05 compute-0 sudo[88052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:05 compute-0 ceph-mgr[74679]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/3205527513; not ready for session (expect reconnect)
Dec 09 12:04:05 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 09 12:04:05 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 09 12:04:05 compute-0 ceph-mgr[74679]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 09 12:04:05 compute-0 ceph-mgr[74679]: log_channel(audit) log [DBG] : from='client.14301 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 12:04:05 compute-0 ceph-mgr[74679]: [cephadm INFO root] Saving service node-exporter spec with placement *
Dec 09 12:04:05 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Saving service node-exporter spec with placement *
Dec 09 12:04:05 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Dec 09 12:04:05 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:05 compute-0 ceph-mgr[74679]: [cephadm INFO root] Saving service grafana spec with placement compute-0;count:1
Dec 09 12:04:05 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Saving service grafana spec with placement compute-0;count:1
Dec 09 12:04:05 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Dec 09 12:04:05 compute-0 podman[88137]: 2025-12-09 12:04:05.774016932 +0000 UTC m=+0.041273057 container create 421caf536ed84e4cdb6314c2ceaee5692511ad92a782cba4d09632b506b4dec7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_ardinghelli, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 12:04:05 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Dec 09 12:04:05 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:05 compute-0 ceph-mgr[74679]: [cephadm INFO root] Saving service prometheus spec with placement compute-0;count:1
Dec 09 12:04:05 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Saving service prometheus spec with placement compute-0;count:1
Dec 09 12:04:05 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Dec 09 12:04:05 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Dec 09 12:04:05 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:05 compute-0 ceph-mgr[74679]: [cephadm INFO root] Saving service alertmanager spec with placement compute-0;count:1
Dec 09 12:04:05 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Saving service alertmanager spec with placement compute-0;count:1
Dec 09 12:04:05 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Dec 09 12:04:05 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:05 compute-0 sweet_greider[88029]: Scheduled node-exporter update...
Dec 09 12:04:05 compute-0 sweet_greider[88029]: Scheduled grafana update...
Dec 09 12:04:05 compute-0 sweet_greider[88029]: Scheduled prometheus update...
Dec 09 12:04:05 compute-0 sweet_greider[88029]: Scheduled alertmanager update...
Dec 09 12:04:05 compute-0 systemd[1]: Started libpod-conmon-421caf536ed84e4cdb6314c2ceaee5692511ad92a782cba4d09632b506b4dec7.scope.
Dec 09 12:04:05 compute-0 ceph-mon[74388]: Updating compute-0:/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.conf
Dec 09 12:04:05 compute-0 ceph-mon[74388]: pgmap v89: 162 pgs: 162 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 09 12:04:05 compute-0 ceph-mon[74388]: Updating compute-1:/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.conf
Dec 09 12:04:05 compute-0 ceph-mon[74388]: Updating compute-2:/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.conf
Dec 09 12:04:05 compute-0 ceph-mon[74388]: 4.4 scrub starts
Dec 09 12:04:05 compute-0 ceph-mon[74388]: 4.4 scrub ok
Dec 09 12:04:05 compute-0 ceph-mon[74388]: osdmap e30: 3 total, 2 up, 3 in
Dec 09 12:04:05 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 09 12:04:05 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:05 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:05 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:05 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:05 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:05 compute-0 ceph-mon[74388]: 2.14 deep-scrub starts
Dec 09 12:04:05 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:05 compute-0 ceph-mon[74388]: 2.14 deep-scrub ok
Dec 09 12:04:05 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:05 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 09 12:04:05 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 09 12:04:05 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:04:05 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 09 12:04:05 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:05 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:05 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:05 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:05 compute-0 podman[88009]: 2025-12-09 12:04:05.828018505 +0000 UTC m=+0.645606605 container died 38b654153a2ddb5405ba3641ffa63f4ed053e25f1a1df64ee4daa6d03b7486ef (image=quay.io/ceph/ceph:v19, name=sweet_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 09 12:04:05 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:04:05 compute-0 systemd[1]: libpod-38b654153a2ddb5405ba3641ffa63f4ed053e25f1a1df64ee4daa6d03b7486ef.scope: Deactivated successfully.
Dec 09 12:04:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-427cafea5dca06a536a4741f04ff2eeafdd3d1c0eb370f431df188d48f04fbaa-merged.mount: Deactivated successfully.
Dec 09 12:04:05 compute-0 podman[88137]: 2025-12-09 12:04:05.755755072 +0000 UTC m=+0.023011217 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 09 12:04:05 compute-0 podman[88137]: 2025-12-09 12:04:05.854147724 +0000 UTC m=+0.121403849 container init 421caf536ed84e4cdb6314c2ceaee5692511ad92a782cba4d09632b506b4dec7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_ardinghelli, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 12:04:05 compute-0 podman[88137]: 2025-12-09 12:04:05.86042066 +0000 UTC m=+0.127676785 container start 421caf536ed84e4cdb6314c2ceaee5692511ad92a782cba4d09632b506b4dec7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_ardinghelli, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 12:04:05 compute-0 podman[88137]: 2025-12-09 12:04:05.863367147 +0000 UTC m=+0.130623302 container attach 421caf536ed84e4cdb6314c2ceaee5692511ad92a782cba4d09632b506b4dec7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 09 12:04:05 compute-0 sweet_ardinghelli[88155]: 167 167
Dec 09 12:04:05 compute-0 systemd[1]: libpod-421caf536ed84e4cdb6314c2ceaee5692511ad92a782cba4d09632b506b4dec7.scope: Deactivated successfully.
Dec 09 12:04:05 compute-0 conmon[88155]: conmon 421caf536ed84e4cdb63 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-421caf536ed84e4cdb6314c2ceaee5692511ad92a782cba4d09632b506b4dec7.scope/container/memory.events
Dec 09 12:04:05 compute-0 podman[88009]: 2025-12-09 12:04:05.866785299 +0000 UTC m=+0.684373369 container remove 38b654153a2ddb5405ba3641ffa63f4ed053e25f1a1df64ee4daa6d03b7486ef (image=quay.io/ceph/ceph:v19, name=sweet_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec 09 12:04:05 compute-0 podman[88137]: 2025-12-09 12:04:05.867631776 +0000 UTC m=+0.134887901 container died 421caf536ed84e4cdb6314c2ceaee5692511ad92a782cba4d09632b506b4dec7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_ardinghelli, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 12:04:05 compute-0 systemd[1]: libpod-conmon-38b654153a2ddb5405ba3641ffa63f4ed053e25f1a1df64ee4daa6d03b7486ef.scope: Deactivated successfully.
Dec 09 12:04:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-d51fc11fa32b5b988be548df97d3cd6283ed72c7e3545f77067525f4a0f4d84d-merged.mount: Deactivated successfully.
Dec 09 12:04:05 compute-0 sudo[88006]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:05 compute-0 podman[88137]: 2025-12-09 12:04:05.905967196 +0000 UTC m=+0.173223321 container remove 421caf536ed84e4cdb6314c2ceaee5692511ad92a782cba4d09632b506b4dec7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_ardinghelli, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 12:04:05 compute-0 systemd[1]: libpod-conmon-421caf536ed84e4cdb6314c2ceaee5692511ad92a782cba4d09632b506b4dec7.scope: Deactivated successfully.
Dec 09 12:04:06 compute-0 podman[88189]: 2025-12-09 12:04:06.062857479 +0000 UTC m=+0.037936557 container create 38f779ecfa8e57fee73323d8d6df001d27d096b669f435425dd13775814844ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_lovelace, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 12:04:06 compute-0 systemd[1]: Started libpod-conmon-38f779ecfa8e57fee73323d8d6df001d27d096b669f435425dd13775814844ff.scope.
Dec 09 12:04:06 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:04:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c802810ca9d2f0b1b122d03767045e8d4aa3fed89e1ab596fc3d7908c05ae3aa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c802810ca9d2f0b1b122d03767045e8d4aa3fed89e1ab596fc3d7908c05ae3aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c802810ca9d2f0b1b122d03767045e8d4aa3fed89e1ab596fc3d7908c05ae3aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c802810ca9d2f0b1b122d03767045e8d4aa3fed89e1ab596fc3d7908c05ae3aa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c802810ca9d2f0b1b122d03767045e8d4aa3fed89e1ab596fc3d7908c05ae3aa/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:06 compute-0 podman[88189]: 2025-12-09 12:04:06.135198225 +0000 UTC m=+0.110277323 container init 38f779ecfa8e57fee73323d8d6df001d27d096b669f435425dd13775814844ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_lovelace, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec 09 12:04:06 compute-0 podman[88189]: 2025-12-09 12:04:06.046834212 +0000 UTC m=+0.021913320 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 09 12:04:06 compute-0 podman[88189]: 2025-12-09 12:04:06.145064099 +0000 UTC m=+0.120143177 container start 38f779ecfa8e57fee73323d8d6df001d27d096b669f435425dd13775814844ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_lovelace, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 09 12:04:06 compute-0 podman[88189]: 2025-12-09 12:04:06.149945369 +0000 UTC m=+0.125024467 container attach 38f779ecfa8e57fee73323d8d6df001d27d096b669f435425dd13775814844ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 09 12:04:06 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Dec 09 12:04:06 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Dec 09 12:04:06 compute-0 ceph-mon[74388]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.102:6800/3205527513,v1:192.168.122.102:6801/3205527513] boot
Dec 09 12:04:06 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Dec 09 12:04:06 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 09 12:04:06 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 09 12:04:06 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 31 pg[6.1b( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=31 pruub=7.350499630s) [2] r=-1 lpr=31 pi=[25,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.747367859s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:04:06 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 31 pg[6.1b( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=31 pruub=7.350453854s) [2] r=-1 lpr=31 pi=[25,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.747367859s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:04:06 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 31 pg[6.1e( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=31 pruub=7.350617886s) [2] r=-1 lpr=31 pi=[25,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.747558594s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:04:06 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 31 pg[6.1e( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=31 pruub=7.350585461s) [2] r=-1 lpr=31 pi=[25,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.747558594s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:04:06 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 31 pg[3.1b( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=31 pruub=4.168659210s) [2] r=-1 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 65.565757751s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:04:06 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 31 pg[3.1b( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=31 pruub=4.168643475s) [2] r=-1 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 65.565757751s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:04:06 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 31 pg[4.1c( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=31 pruub=5.184651375s) [2] r=-1 lpr=31 pi=[23,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.581787109s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:04:06 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 31 pg[4.1c( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=31 pruub=5.184641361s) [2] r=-1 lpr=31 pi=[23,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.581787109s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:04:06 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 31 pg[2.1b( empty local-lis/les=25/26 n=0 ec=20/13 lis/c=25/25 les/c/f=26/26/0 sis=31 pruub=7.350204468s) [2] r=-1 lpr=31 pi=[25,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.747467041s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:04:06 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 31 pg[4.1d( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=31 pruub=5.184526920s) [2] r=-1 lpr=31 pi=[23,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.581771851s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:04:06 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 31 pg[2.1b( empty local-lis/les=25/26 n=0 ec=20/13 lis/c=25/25 les/c/f=26/26/0 sis=31 pruub=7.350186825s) [2] r=-1 lpr=31 pi=[25,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.747467041s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:04:06 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 31 pg[4.1d( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=31 pruub=5.184484005s) [2] r=-1 lpr=31 pi=[23,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.581771851s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:04:06 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 31 pg[4.19( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=31 pruub=5.184544086s) [2] r=-1 lpr=31 pi=[23,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.581893921s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:04:06 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 31 pg[4.3( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=31 pruub=5.184171677s) [2] r=-1 lpr=31 pi=[23,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.581634521s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:04:06 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 31 pg[4.3( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=31 pruub=5.184159279s) [2] r=-1 lpr=31 pi=[23,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.581634521s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:04:06 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 31 pg[6.1( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=31 pruub=7.350059509s) [2] r=-1 lpr=31 pi=[25,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.747634888s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:04:06 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 31 pg[6.1( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=31 pruub=7.349995136s) [2] r=-1 lpr=31 pi=[25,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.747634888s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:04:06 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 31 pg[4.6( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=31 pruub=5.183852196s) [2] r=-1 lpr=31 pi=[23,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.581565857s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:04:06 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 31 pg[4.6( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=31 pruub=5.183834553s) [2] r=-1 lpr=31 pi=[23,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.581565857s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:04:06 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 31 pg[4.2( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=31 pruub=5.183743477s) [2] r=-1 lpr=31 pi=[23,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.581497192s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:04:06 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 31 pg[4.2( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=31 pruub=5.183726788s) [2] r=-1 lpr=31 pi=[23,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.581497192s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:04:06 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 31 pg[4.19( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=31 pruub=5.184371948s) [2] r=-1 lpr=31 pi=[23,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.581893921s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:04:06 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 31 pg[5.0( empty local-lis/les=23/24 n=0 ec=16/16 lis/c=23/23 les/c/f=24/24/0 sis=31 pruub=5.183495998s) [2] r=-1 lpr=31 pi=[23,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.581420898s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:04:06 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 31 pg[5.0( empty local-lis/les=23/24 n=0 ec=16/16 lis/c=23/23 les/c/f=24/24/0 sis=31 pruub=5.183369160s) [2] r=-1 lpr=31 pi=[23,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.581420898s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:04:06 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 31 pg[3.0( empty local-lis/les=22/23 n=0 ec=14/14 lis/c=22/22 les/c/f=23/23/0 sis=31 pruub=4.166809559s) [2] r=-1 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 65.564895630s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:04:06 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 31 pg[3.0( empty local-lis/les=22/23 n=0 ec=14/14 lis/c=22/22 les/c/f=23/23/0 sis=31 pruub=4.166789055s) [2] r=-1 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 65.564895630s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:04:06 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 31 pg[3.8( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=31 pruub=4.167790890s) [2] r=-1 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 65.565734863s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:04:06 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 31 pg[2.a( empty local-lis/les=25/26 n=0 ec=20/13 lis/c=25/25 les/c/f=26/26/0 sis=31 pruub=7.349112988s) [2] r=-1 lpr=31 pi=[25,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.747337341s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:04:06 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 31 pg[2.a( empty local-lis/les=25/26 n=0 ec=20/13 lis/c=25/25 les/c/f=26/26/0 sis=31 pruub=7.349099159s) [2] r=-1 lpr=31 pi=[25,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.747337341s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:04:06 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 31 pg[5.b( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=31 pruub=5.183557510s) [2] r=-1 lpr=31 pi=[23,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.581809998s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:04:06 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 31 pg[5.b( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=31 pruub=5.183543205s) [2] r=-1 lpr=31 pi=[23,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.581809998s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:04:06 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 31 pg[2.c( empty local-lis/les=25/26 n=0 ec=20/13 lis/c=25/25 les/c/f=26/26/0 sis=31 pruub=7.348433018s) [2] r=-1 lpr=31 pi=[25,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.746772766s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:04:06 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 31 pg[5.8( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=31 pruub=5.182825089s) [2] r=-1 lpr=31 pi=[23,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.581207275s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:04:06 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 31 pg[2.c( empty local-lis/les=25/26 n=0 ec=20/13 lis/c=25/25 les/c/f=26/26/0 sis=31 pruub=7.348377705s) [2] r=-1 lpr=31 pi=[25,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.746772766s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:04:06 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 31 pg[5.8( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=31 pruub=5.182810307s) [2] r=-1 lpr=31 pi=[23,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.581207275s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:04:06 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 31 pg[3.8( empty local-lis/les=22/23 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=31 pruub=4.167510986s) [2] r=-1 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 65.565734863s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:04:06 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 31 pg[2.d( empty local-lis/les=25/26 n=0 ec=20/13 lis/c=25/25 les/c/f=26/26/0 sis=31 pruub=7.348541737s) [2] r=-1 lpr=31 pi=[25,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.747039795s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:04:06 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 31 pg[2.d( empty local-lis/les=25/26 n=0 ec=20/13 lis/c=25/25 les/c/f=26/26/0 sis=31 pruub=7.348499775s) [2] r=-1 lpr=31 pi=[25,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.747039795s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:04:06 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 31 pg[2.10( empty local-lis/les=25/26 n=0 ec=20/13 lis/c=25/25 les/c/f=26/26/0 sis=31 pruub=7.348664284s) [2] r=-1 lpr=31 pi=[25,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.747215271s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:04:06 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 31 pg[2.10( empty local-lis/les=25/26 n=0 ec=20/13 lis/c=25/25 les/c/f=26/26/0 sis=31 pruub=7.348649502s) [2] r=-1 lpr=31 pi=[25,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.747215271s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:04:06 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 31 pg[6.17( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=31 pruub=7.351933956s) [2] r=-1 lpr=31 pi=[25,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.750564575s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:04:06 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 31 pg[6.17( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=31 pruub=7.351922035s) [2] r=-1 lpr=31 pi=[25,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.750564575s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:04:06 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 31 pg[2.13( empty local-lis/les=25/26 n=0 ec=20/13 lis/c=25/25 les/c/f=26/26/0 sis=31 pruub=7.348116875s) [2] r=-1 lpr=31 pi=[25,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.746795654s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:04:06 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 31 pg[2.13( empty local-lis/les=25/26 n=0 ec=20/13 lis/c=25/25 les/c/f=26/26/0 sis=31 pruub=7.348054409s) [2] r=-1 lpr=31 pi=[25,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.746795654s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:04:06 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 31 pg[4.14( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=31 pruub=5.182068348s) [2] r=-1 lpr=31 pi=[23,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.580871582s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:04:06 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 31 pg[2.15( empty local-lis/les=25/26 n=0 ec=20/13 lis/c=25/25 les/c/f=26/26/0 sis=31 pruub=7.348135948s) [2] r=-1 lpr=31 pi=[25,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.746955872s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:04:06 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 31 pg[4.14( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=31 pruub=5.182055950s) [2] r=-1 lpr=31 pi=[23,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.580871582s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:04:06 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 31 pg[2.15( empty local-lis/les=25/26 n=0 ec=20/13 lis/c=25/25 les/c/f=26/26/0 sis=31 pruub=7.348124504s) [2] r=-1 lpr=31 pi=[25,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.746955872s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:04:06 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 31 pg[5.d( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=31 pruub=5.182571411s) [2] r=-1 lpr=31 pi=[23,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.581283569s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:04:06 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 31 pg[5.13( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=31 pruub=5.175633430s) [2] r=-1 lpr=31 pi=[23,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.574554443s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:04:06 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 31 pg[5.13( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=31 pruub=5.175619602s) [2] r=-1 lpr=31 pi=[23,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.574554443s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:04:06 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 31 pg[6.12( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=31 pruub=7.352023125s) [2] r=-1 lpr=31 pi=[25,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.750984192s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:04:06 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 31 pg[6.12( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=31 pruub=7.352011681s) [2] r=-1 lpr=31 pi=[25,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.750984192s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:04:06 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 31 pg[5.12( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=31 pruub=5.175545216s) [2] r=-1 lpr=31 pi=[23,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.574615479s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:04:06 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 31 pg[6.1c( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=31 pruub=7.351884842s) [2] r=-1 lpr=31 pi=[25,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.750968933s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:04:06 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 31 pg[5.12( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=31 pruub=5.175521851s) [2] r=-1 lpr=31 pi=[23,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.574615479s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:04:06 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 31 pg[6.1c( empty local-lis/les=25/26 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=31 pruub=7.351869106s) [2] r=-1 lpr=31 pi=[25,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.750968933s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:04:06 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 31 pg[5.d( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=31 pruub=5.182259083s) [2] r=-1 lpr=31 pi=[23,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.581283569s@ mbc={}] state<Start>: transitioning to Stray
Dec 09 12:04:06 compute-0 sudo[88233]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdxfvjivsneerfgfksgrkswspbqhfdrh ; /usr/bin/python3'
Dec 09 12:04:06 compute-0 sudo[88233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:04:06 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v92: 162 pgs: 162 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 09 12:04:06 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 12:04:06 compute-0 python3[88235]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 750b57e3-924f-51a5-ab09-01517535f732 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/server_port 8443 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 12:04:06 compute-0 awesome_lovelace[88205]: --> passed data devices: 0 physical, 1 LVM
Dec 09 12:04:06 compute-0 awesome_lovelace[88205]: --> All data devices are unavailable
Dec 09 12:04:06 compute-0 podman[88244]: 2025-12-09 12:04:06.49369621 +0000 UTC m=+0.044345328 container create bd0719140a77543b8f276bbad95414c3b5cec7d1fcab86f37bd08095ec98dbee (image=quay.io/ceph/ceph:v19, name=adoring_wu, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec 09 12:04:06 compute-0 systemd[1]: libpod-38f779ecfa8e57fee73323d8d6df001d27d096b669f435425dd13775814844ff.scope: Deactivated successfully.
Dec 09 12:04:06 compute-0 podman[88189]: 2025-12-09 12:04:06.498946942 +0000 UTC m=+0.474026040 container died 38f779ecfa8e57fee73323d8d6df001d27d096b669f435425dd13775814844ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 09 12:04:06 compute-0 systemd[1]: Started libpod-conmon-bd0719140a77543b8f276bbad95414c3b5cec7d1fcab86f37bd08095ec98dbee.scope.
Dec 09 12:04:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-c802810ca9d2f0b1b122d03767045e8d4aa3fed89e1ab596fc3d7908c05ae3aa-merged.mount: Deactivated successfully.
Dec 09 12:04:06 compute-0 podman[88189]: 2025-12-09 12:04:06.54549078 +0000 UTC m=+0.520569858 container remove 38f779ecfa8e57fee73323d8d6df001d27d096b669f435425dd13775814844ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_lovelace, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 12:04:06 compute-0 systemd[1]: libpod-conmon-38f779ecfa8e57fee73323d8d6df001d27d096b669f435425dd13775814844ff.scope: Deactivated successfully.
Dec 09 12:04:06 compute-0 podman[88244]: 2025-12-09 12:04:06.473982382 +0000 UTC m=+0.024631520 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:04:06 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:04:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bb20f83ab912cc2332ce8bd62c348d729096610296991acb4071fa7675607b6/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bb20f83ab912cc2332ce8bd62c348d729096610296991acb4071fa7675607b6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bb20f83ab912cc2332ce8bd62c348d729096610296991acb4071fa7675607b6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:06 compute-0 podman[88244]: 2025-12-09 12:04:06.591948217 +0000 UTC m=+0.142597365 container init bd0719140a77543b8f276bbad95414c3b5cec7d1fcab86f37bd08095ec98dbee (image=quay.io/ceph/ceph:v19, name=adoring_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec 09 12:04:06 compute-0 sudo[88052]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:06 compute-0 podman[88244]: 2025-12-09 12:04:06.599431742 +0000 UTC m=+0.150080850 container start bd0719140a77543b8f276bbad95414c3b5cec7d1fcab86f37bd08095ec98dbee (image=quay.io/ceph/ceph:v19, name=adoring_wu, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 12:04:06 compute-0 podman[88244]: 2025-12-09 12:04:06.604269181 +0000 UTC m=+0.154918319 container attach bd0719140a77543b8f276bbad95414c3b5cec7d1fcab86f37bd08095ec98dbee (image=quay.io/ceph/ceph:v19, name=adoring_wu, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 12:04:06 compute-0 sudo[88276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 12:04:06 compute-0 sudo[88276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:06 compute-0 sudo[88276]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:06 compute-0 sudo[88301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 750b57e3-924f-51a5-ab09-01517535f732 -- lvm list --format json
Dec 09 12:04:06 compute-0 sudo[88301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:06 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Dec 09 12:04:06 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Dec 09 12:04:06 compute-0 ceph-mon[74388]: OSD bench result of 6812.279381 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 09 12:04:06 compute-0 ceph-mon[74388]: from='client.14301 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 12:04:06 compute-0 ceph-mon[74388]: Saving service node-exporter spec with placement *
Dec 09 12:04:06 compute-0 ceph-mon[74388]: Saving service grafana spec with placement compute-0;count:1
Dec 09 12:04:06 compute-0 ceph-mon[74388]: 5.5 scrub starts
Dec 09 12:04:06 compute-0 ceph-mon[74388]: Saving service prometheus spec with placement compute-0;count:1
Dec 09 12:04:06 compute-0 ceph-mon[74388]: 5.5 scrub ok
Dec 09 12:04:06 compute-0 ceph-mon[74388]: Saving service alertmanager spec with placement compute-0;count:1
Dec 09 12:04:06 compute-0 ceph-mon[74388]: osd.2 [v2:192.168.122.102:6800/3205527513,v1:192.168.122.102:6801/3205527513] boot
Dec 09 12:04:06 compute-0 ceph-mon[74388]: osdmap e31: 3 total, 3 up, 3 in
Dec 09 12:04:06 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 09 12:04:06 compute-0 ceph-mon[74388]: 2.16 scrub starts
Dec 09 12:04:06 compute-0 ceph-mon[74388]: 2.16 scrub ok
Dec 09 12:04:06 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/server_port}] v 0)
Dec 09 12:04:06 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1338093795' entity='client.admin' 
Dec 09 12:04:07 compute-0 systemd[1]: libpod-bd0719140a77543b8f276bbad95414c3b5cec7d1fcab86f37bd08095ec98dbee.scope: Deactivated successfully.
Dec 09 12:04:07 compute-0 podman[88244]: 2025-12-09 12:04:07.009998856 +0000 UTC m=+0.560647974 container died bd0719140a77543b8f276bbad95414c3b5cec7d1fcab86f37bd08095ec98dbee (image=quay.io/ceph/ceph:v19, name=adoring_wu, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 12:04:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-1bb20f83ab912cc2332ce8bd62c348d729096610296991acb4071fa7675607b6-merged.mount: Deactivated successfully.
Dec 09 12:04:07 compute-0 podman[88244]: 2025-12-09 12:04:07.069021295 +0000 UTC m=+0.619670413 container remove bd0719140a77543b8f276bbad95414c3b5cec7d1fcab86f37bd08095ec98dbee (image=quay.io/ceph/ceph:v19, name=adoring_wu, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec 09 12:04:07 compute-0 systemd[1]: libpod-conmon-bd0719140a77543b8f276bbad95414c3b5cec7d1fcab86f37bd08095ec98dbee.scope: Deactivated successfully.
Dec 09 12:04:07 compute-0 sudo[88233]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:07 compute-0 podman[88397]: 2025-12-09 12:04:07.222476265 +0000 UTC m=+0.044609836 container create e8d7f912f4f30cf1facb1890b2cb946627da0e8d0fdbef9bde3b6b6ab5de2798 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_franklin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 12:04:07 compute-0 systemd[1]: Started libpod-conmon-e8d7f912f4f30cf1facb1890b2cb946627da0e8d0fdbef9bde3b6b6ab5de2798.scope.
Dec 09 12:04:07 compute-0 sudo[88433]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yraaaafytwnbzpqkqtmrdbnfgzllxnuj ; /usr/bin/python3'
Dec 09 12:04:07 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Dec 09 12:04:07 compute-0 sudo[88433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:04:07 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Dec 09 12:04:07 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Dec 09 12:04:07 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:04:07 compute-0 podman[88397]: 2025-12-09 12:04:07.202103547 +0000 UTC m=+0.024237158 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 09 12:04:07 compute-0 podman[88397]: 2025-12-09 12:04:07.31305116 +0000 UTC m=+0.135184761 container init e8d7f912f4f30cf1facb1890b2cb946627da0e8d0fdbef9bde3b6b6ab5de2798 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_franklin, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True)
Dec 09 12:04:07 compute-0 podman[88397]: 2025-12-09 12:04:07.318801749 +0000 UTC m=+0.140935330 container start e8d7f912f4f30cf1facb1890b2cb946627da0e8d0fdbef9bde3b6b6ab5de2798 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_franklin, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 09 12:04:07 compute-0 podman[88397]: 2025-12-09 12:04:07.322304764 +0000 UTC m=+0.144438345 container attach e8d7f912f4f30cf1facb1890b2cb946627da0e8d0fdbef9bde3b6b6ab5de2798 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_franklin, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Dec 09 12:04:07 compute-0 peaceful_franklin[88437]: 167 167
Dec 09 12:04:07 compute-0 systemd[1]: libpod-e8d7f912f4f30cf1facb1890b2cb946627da0e8d0fdbef9bde3b6b6ab5de2798.scope: Deactivated successfully.
Dec 09 12:04:07 compute-0 podman[88397]: 2025-12-09 12:04:07.325486279 +0000 UTC m=+0.147619860 container died e8d7f912f4f30cf1facb1890b2cb946627da0e8d0fdbef9bde3b6b6ab5de2798 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_franklin, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 12:04:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-595404d93a0034762b607fc4574be53910263d13a567e481d83baee78ecbd9c0-merged.mount: Deactivated successfully.
Dec 09 12:04:07 compute-0 podman[88397]: 2025-12-09 12:04:07.363152896 +0000 UTC m=+0.185286477 container remove e8d7f912f4f30cf1facb1890b2cb946627da0e8d0fdbef9bde3b6b6ab5de2798 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_franklin, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 09 12:04:07 compute-0 systemd[1]: libpod-conmon-e8d7f912f4f30cf1facb1890b2cb946627da0e8d0fdbef9bde3b6b6ab5de2798.scope: Deactivated successfully.
Dec 09 12:04:07 compute-0 python3[88439]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 750b57e3-924f-51a5-ab09-01517535f732 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/ssl_server_port 8443 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 12:04:07 compute-0 podman[88455]: 2025-12-09 12:04:07.46376578 +0000 UTC m=+0.042925760 container create f58e2fc49ad2d17fb808863c5fa9408f7d1f3e9ef020e74d02c00681dde5142e (image=quay.io/ceph/ceph:v19, name=bold_noyce, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 09 12:04:07 compute-0 systemd[1]: Started libpod-conmon-f58e2fc49ad2d17fb808863c5fa9408f7d1f3e9ef020e74d02c00681dde5142e.scope.
Dec 09 12:04:07 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:04:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82335f6e8e286cecba540c0f4caa57195fbd5a8381c9c7b50195e463401de855/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82335f6e8e286cecba540c0f4caa57195fbd5a8381c9c7b50195e463401de855/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82335f6e8e286cecba540c0f4caa57195fbd5a8381c9c7b50195e463401de855/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:07 compute-0 podman[88474]: 2025-12-09 12:04:07.525236719 +0000 UTC m=+0.042597030 container create 8390160eb80c72f059e819a9f4f51c6939343fbbc8a65681cde8b48be59428d8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_edison, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 12:04:07 compute-0 podman[88455]: 2025-12-09 12:04:07.540393667 +0000 UTC m=+0.119553667 container init f58e2fc49ad2d17fb808863c5fa9408f7d1f3e9ef020e74d02c00681dde5142e (image=quay.io/ceph/ceph:v19, name=bold_noyce, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 12:04:07 compute-0 podman[88455]: 2025-12-09 12:04:07.445614624 +0000 UTC m=+0.024774634 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:04:07 compute-0 podman[88455]: 2025-12-09 12:04:07.549973711 +0000 UTC m=+0.129133691 container start f58e2fc49ad2d17fb808863c5fa9408f7d1f3e9ef020e74d02c00681dde5142e (image=quay.io/ceph/ceph:v19, name=bold_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec 09 12:04:07 compute-0 podman[88455]: 2025-12-09 12:04:07.553219748 +0000 UTC m=+0.132379728 container attach f58e2fc49ad2d17fb808863c5fa9408f7d1f3e9ef020e74d02c00681dde5142e (image=quay.io/ceph/ceph:v19, name=bold_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 09 12:04:07 compute-0 systemd[1]: Started libpod-conmon-8390160eb80c72f059e819a9f4f51c6939343fbbc8a65681cde8b48be59428d8.scope.
Dec 09 12:04:07 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:04:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ed8dcaa298b0c501b28f1d32793250a3b16a830ee7044df429cab491ddeb3ba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ed8dcaa298b0c501b28f1d32793250a3b16a830ee7044df429cab491ddeb3ba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ed8dcaa298b0c501b28f1d32793250a3b16a830ee7044df429cab491ddeb3ba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ed8dcaa298b0c501b28f1d32793250a3b16a830ee7044df429cab491ddeb3ba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:07 compute-0 podman[88474]: 2025-12-09 12:04:07.506061789 +0000 UTC m=+0.023422110 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 09 12:04:07 compute-0 podman[88474]: 2025-12-09 12:04:07.604529614 +0000 UTC m=+0.121889965 container init 8390160eb80c72f059e819a9f4f51c6939343fbbc8a65681cde8b48be59428d8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Dec 09 12:04:07 compute-0 podman[88474]: 2025-12-09 12:04:07.612211496 +0000 UTC m=+0.129571817 container start 8390160eb80c72f059e819a9f4f51c6939343fbbc8a65681cde8b48be59428d8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_edison, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 12:04:07 compute-0 podman[88474]: 2025-12-09 12:04:07.61509736 +0000 UTC m=+0.132457701 container attach 8390160eb80c72f059e819a9f4f51c6939343fbbc8a65681cde8b48be59428d8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_edison, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 12:04:07 compute-0 ceph-mgr[74679]: [progress INFO root] Writing back 11 completed events
Dec 09 12:04:07 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 09 12:04:07 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:07 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Dec 09 12:04:07 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Dec 09 12:04:07 compute-0 ceph-mon[74388]: pgmap v92: 162 pgs: 162 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 09 12:04:07 compute-0 ceph-mon[74388]: 3.2 scrub starts
Dec 09 12:04:07 compute-0 ceph-mon[74388]: 3.2 scrub ok
Dec 09 12:04:07 compute-0 ceph-mon[74388]: from='client.? 192.168.122.100:0/1338093795' entity='client.admin' 
Dec 09 12:04:07 compute-0 ceph-mon[74388]: 2.17 deep-scrub starts
Dec 09 12:04:07 compute-0 ceph-mon[74388]: 2.17 deep-scrub ok
Dec 09 12:04:07 compute-0 ceph-mon[74388]: osdmap e32: 3 total, 3 up, 3 in
Dec 09 12:04:07 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:07 compute-0 hungry_edison[88496]: {
Dec 09 12:04:07 compute-0 hungry_edison[88496]:     "1": [
Dec 09 12:04:07 compute-0 hungry_edison[88496]:         {
Dec 09 12:04:07 compute-0 hungry_edison[88496]:             "devices": [
Dec 09 12:04:07 compute-0 hungry_edison[88496]:                 "/dev/loop3"
Dec 09 12:04:07 compute-0 hungry_edison[88496]:             ],
Dec 09 12:04:07 compute-0 hungry_edison[88496]:             "lv_name": "ceph_lv0",
Dec 09 12:04:07 compute-0 hungry_edison[88496]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 12:04:07 compute-0 hungry_edison[88496]:             "lv_size": "21470642176",
Dec 09 12:04:07 compute-0 hungry_edison[88496]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=NmXN7G-RzdJ-ddgq-wQWO-4Bzg-8Ecu-xD2Ou5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=750b57e3-924f-51a5-ab09-01517535f732,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=0cb4756c-1cb3-414f-a66b-4ca287023452,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 12:04:07 compute-0 hungry_edison[88496]:             "lv_uuid": "NmXN7G-RzdJ-ddgq-wQWO-4Bzg-8Ecu-xD2Ou5",
Dec 09 12:04:07 compute-0 hungry_edison[88496]:             "name": "ceph_lv0",
Dec 09 12:04:07 compute-0 hungry_edison[88496]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 12:04:07 compute-0 hungry_edison[88496]:             "tags": {
Dec 09 12:04:07 compute-0 hungry_edison[88496]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 09 12:04:07 compute-0 hungry_edison[88496]:                 "ceph.block_uuid": "NmXN7G-RzdJ-ddgq-wQWO-4Bzg-8Ecu-xD2Ou5",
Dec 09 12:04:07 compute-0 hungry_edison[88496]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 12:04:07 compute-0 hungry_edison[88496]:                 "ceph.cluster_fsid": "750b57e3-924f-51a5-ab09-01517535f732",
Dec 09 12:04:07 compute-0 hungry_edison[88496]:                 "ceph.cluster_name": "ceph",
Dec 09 12:04:07 compute-0 hungry_edison[88496]:                 "ceph.crush_device_class": "",
Dec 09 12:04:07 compute-0 hungry_edison[88496]:                 "ceph.encrypted": "0",
Dec 09 12:04:07 compute-0 hungry_edison[88496]:                 "ceph.osd_fsid": "0cb4756c-1cb3-414f-a66b-4ca287023452",
Dec 09 12:04:07 compute-0 hungry_edison[88496]:                 "ceph.osd_id": "1",
Dec 09 12:04:07 compute-0 hungry_edison[88496]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 12:04:07 compute-0 hungry_edison[88496]:                 "ceph.type": "block",
Dec 09 12:04:07 compute-0 hungry_edison[88496]:                 "ceph.vdo": "0",
Dec 09 12:04:07 compute-0 hungry_edison[88496]:                 "ceph.with_tpm": "0"
Dec 09 12:04:07 compute-0 hungry_edison[88496]:             },
Dec 09 12:04:07 compute-0 hungry_edison[88496]:             "type": "block",
Dec 09 12:04:07 compute-0 hungry_edison[88496]:             "vg_name": "ceph_vg0"
Dec 09 12:04:07 compute-0 hungry_edison[88496]:         }
Dec 09 12:04:07 compute-0 hungry_edison[88496]:     ]
Dec 09 12:04:07 compute-0 hungry_edison[88496]: }
Dec 09 12:04:07 compute-0 systemd[1]: libpod-8390160eb80c72f059e819a9f4f51c6939343fbbc8a65681cde8b48be59428d8.scope: Deactivated successfully.
Dec 09 12:04:07 compute-0 podman[88474]: 2025-12-09 12:04:07.942055329 +0000 UTC m=+0.459415660 container died 8390160eb80c72f059e819a9f4f51c6939343fbbc8a65681cde8b48be59428d8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_edison, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 09 12:04:07 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ssl_server_port}] v 0)
Dec 09 12:04:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ed8dcaa298b0c501b28f1d32793250a3b16a830ee7044df429cab491ddeb3ba-merged.mount: Deactivated successfully.
Dec 09 12:04:07 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1945426698' entity='client.admin' 
Dec 09 12:04:07 compute-0 podman[88474]: 2025-12-09 12:04:07.98196368 +0000 UTC m=+0.499324001 container remove 8390160eb80c72f059e819a9f4f51c6939343fbbc8a65681cde8b48be59428d8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_edison, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 09 12:04:07 compute-0 systemd[1]: libpod-f58e2fc49ad2d17fb808863c5fa9408f7d1f3e9ef020e74d02c00681dde5142e.scope: Deactivated successfully.
Dec 09 12:04:07 compute-0 conmon[88486]: conmon f58e2fc49ad2d17fb808 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f58e2fc49ad2d17fb808863c5fa9408f7d1f3e9ef020e74d02c00681dde5142e.scope/container/memory.events
Dec 09 12:04:07 compute-0 podman[88455]: 2025-12-09 12:04:07.990990587 +0000 UTC m=+0.570150587 container died f58e2fc49ad2d17fb808863c5fa9408f7d1f3e9ef020e74d02c00681dde5142e (image=quay.io/ceph/ceph:v19, name=bold_noyce, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 12:04:07 compute-0 systemd[1]: libpod-conmon-8390160eb80c72f059e819a9f4f51c6939343fbbc8a65681cde8b48be59428d8.scope: Deactivated successfully.
Dec 09 12:04:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-82335f6e8e286cecba540c0f4caa57195fbd5a8381c9c7b50195e463401de855-merged.mount: Deactivated successfully.
Dec 09 12:04:08 compute-0 podman[88455]: 2025-12-09 12:04:08.032227611 +0000 UTC m=+0.611387591 container remove f58e2fc49ad2d17fb808863c5fa9408f7d1f3e9ef020e74d02c00681dde5142e (image=quay.io/ceph/ceph:v19, name=bold_noyce, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 12:04:08 compute-0 sudo[88301]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:08 compute-0 systemd[1]: libpod-conmon-f58e2fc49ad2d17fb808863c5fa9408f7d1f3e9ef020e74d02c00681dde5142e.scope: Deactivated successfully.
Dec 09 12:04:08 compute-0 sudo[88433]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:08 compute-0 sudo[88548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 12:04:08 compute-0 sudo[88548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:08 compute-0 sudo[88548]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:08 compute-0 sudo[88573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 750b57e3-924f-51a5-ab09-01517535f732 -- raw list --format json
Dec 09 12:04:08 compute-0 sudo[88573]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:08 compute-0 sudo[88621]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afdgfzjqgqmcillhtdkfuomcdtskampb ; /usr/bin/python3'
Dec 09 12:04:08 compute-0 sudo[88621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:04:08 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v94: 162 pgs: 50 peering, 112 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec 09 12:04:08 compute-0 python3[88623]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 750b57e3-924f-51a5-ab09-01517535f732 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/ssl false _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 12:04:08 compute-0 podman[88631]: 2025-12-09 12:04:08.447532681 +0000 UTC m=+0.056856618 container create 53dbeaaff7cf66799da2a5ebfbc1c29349ecd7d36596a3fb9dbbb38fd62b0d4d (image=quay.io/ceph/ceph:v19, name=busy_cannon, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Dec 09 12:04:08 compute-0 systemd[1]: Started libpod-conmon-53dbeaaff7cf66799da2a5ebfbc1c29349ecd7d36596a3fb9dbbb38fd62b0d4d.scope.
Dec 09 12:04:08 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:04:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad394d7e459167c4956fc5979a05d0d7417e9bf79f720d9dd28039773468c855/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad394d7e459167c4956fc5979a05d0d7417e9bf79f720d9dd28039773468c855/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad394d7e459167c4956fc5979a05d0d7417e9bf79f720d9dd28039773468c855/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:08 compute-0 podman[88631]: 2025-12-09 12:04:08.418126145 +0000 UTC m=+0.027450102 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:04:08 compute-0 podman[88631]: 2025-12-09 12:04:08.526277868 +0000 UTC m=+0.135601825 container init 53dbeaaff7cf66799da2a5ebfbc1c29349ecd7d36596a3fb9dbbb38fd62b0d4d (image=quay.io/ceph/ceph:v19, name=busy_cannon, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 12:04:08 compute-0 podman[88631]: 2025-12-09 12:04:08.533849726 +0000 UTC m=+0.143173663 container start 53dbeaaff7cf66799da2a5ebfbc1c29349ecd7d36596a3fb9dbbb38fd62b0d4d (image=quay.io/ceph/ceph:v19, name=busy_cannon, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec 09 12:04:08 compute-0 podman[88631]: 2025-12-09 12:04:08.537694283 +0000 UTC m=+0.147018250 container attach 53dbeaaff7cf66799da2a5ebfbc1c29349ecd7d36596a3fb9dbbb38fd62b0d4d (image=quay.io/ceph/ceph:v19, name=busy_cannon, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec 09 12:04:08 compute-0 podman[88683]: 2025-12-09 12:04:08.602842482 +0000 UTC m=+0.036999115 container create 1c63e623ead8b67e780e7b7854db75738a507abacd3dab432d215aa7bde9571f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_hodgkin, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 09 12:04:08 compute-0 systemd[1]: Started libpod-conmon-1c63e623ead8b67e780e7b7854db75738a507abacd3dab432d215aa7bde9571f.scope.
Dec 09 12:04:08 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:04:08 compute-0 podman[88683]: 2025-12-09 12:04:08.677637399 +0000 UTC m=+0.111794052 container init 1c63e623ead8b67e780e7b7854db75738a507abacd3dab432d215aa7bde9571f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_hodgkin, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 12:04:08 compute-0 podman[88683]: 2025-12-09 12:04:08.587714466 +0000 UTC m=+0.021871119 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 09 12:04:08 compute-0 podman[88683]: 2025-12-09 12:04:08.683316676 +0000 UTC m=+0.117473309 container start 1c63e623ead8b67e780e7b7854db75738a507abacd3dab432d215aa7bde9571f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_hodgkin, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 12:04:08 compute-0 podman[88683]: 2025-12-09 12:04:08.686418797 +0000 UTC m=+0.120575430 container attach 1c63e623ead8b67e780e7b7854db75738a507abacd3dab432d215aa7bde9571f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_hodgkin, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True)
Dec 09 12:04:08 compute-0 distracted_hodgkin[88703]: 167 167
Dec 09 12:04:08 compute-0 systemd[1]: libpod-1c63e623ead8b67e780e7b7854db75738a507abacd3dab432d215aa7bde9571f.scope: Deactivated successfully.
Dec 09 12:04:08 compute-0 podman[88683]: 2025-12-09 12:04:08.687287326 +0000 UTC m=+0.121443979 container died 1c63e623ead8b67e780e7b7854db75738a507abacd3dab432d215aa7bde9571f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_hodgkin, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 12:04:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd7364817eb0a4850b38e8dcfa82763e23387453c34df533018ff2c59b1a82c6-merged.mount: Deactivated successfully.
Dec 09 12:04:08 compute-0 podman[88683]: 2025-12-09 12:04:08.719469303 +0000 UTC m=+0.153625936 container remove 1c63e623ead8b67e780e7b7854db75738a507abacd3dab432d215aa7bde9571f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_hodgkin, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec 09 12:04:08 compute-0 systemd[1]: libpod-conmon-1c63e623ead8b67e780e7b7854db75738a507abacd3dab432d215aa7bde9571f.scope: Deactivated successfully.
Dec 09 12:04:08 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Dec 09 12:04:08 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Dec 09 12:04:08 compute-0 ceph-mon[74388]: 3.1 scrub starts
Dec 09 12:04:08 compute-0 ceph-mon[74388]: 3.1 scrub ok
Dec 09 12:04:08 compute-0 ceph-mon[74388]: from='client.? 192.168.122.100:0/1945426698' entity='client.admin' 
Dec 09 12:04:08 compute-0 ceph-mon[74388]: 2.1a scrub starts
Dec 09 12:04:08 compute-0 ceph-mon[74388]: 2.1a scrub ok
Dec 09 12:04:08 compute-0 podman[88741]: 2025-12-09 12:04:08.864799676 +0000 UTC m=+0.038285079 container create b3859f5c511bc7d3c37f1ee4a8fa2ebdf85cd0e4cd34f89b615c64de12a2ad33 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 09 12:04:08 compute-0 systemd[1]: Started libpod-conmon-b3859f5c511bc7d3c37f1ee4a8fa2ebdf85cd0e4cd34f89b615c64de12a2ad33.scope.
Dec 09 12:04:08 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ssl}] v 0)
Dec 09 12:04:08 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3933215134' entity='client.admin' 
Dec 09 12:04:08 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:04:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18dd79ff02396e9a7ed1bc6e8e9e82f15ee5ae8f3f6a435cbc5068eba749144a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18dd79ff02396e9a7ed1bc6e8e9e82f15ee5ae8f3f6a435cbc5068eba749144a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18dd79ff02396e9a7ed1bc6e8e9e82f15ee5ae8f3f6a435cbc5068eba749144a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18dd79ff02396e9a7ed1bc6e8e9e82f15ee5ae8f3f6a435cbc5068eba749144a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:08 compute-0 systemd[1]: libpod-53dbeaaff7cf66799da2a5ebfbc1c29349ecd7d36596a3fb9dbbb38fd62b0d4d.scope: Deactivated successfully.
Dec 09 12:04:08 compute-0 podman[88741]: 2025-12-09 12:04:08.846822725 +0000 UTC m=+0.020308148 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 09 12:04:08 compute-0 podman[88741]: 2025-12-09 12:04:08.945523757 +0000 UTC m=+0.119009190 container init b3859f5c511bc7d3c37f1ee4a8fa2ebdf85cd0e4cd34f89b615c64de12a2ad33 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_banzai, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 09 12:04:08 compute-0 podman[88631]: 2025-12-09 12:04:08.946091536 +0000 UTC m=+0.555415473 container died 53dbeaaff7cf66799da2a5ebfbc1c29349ecd7d36596a3fb9dbbb38fd62b0d4d (image=quay.io/ceph/ceph:v19, name=busy_cannon, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec 09 12:04:08 compute-0 podman[88741]: 2025-12-09 12:04:08.951743762 +0000 UTC m=+0.125229165 container start b3859f5c511bc7d3c37f1ee4a8fa2ebdf85cd0e4cd34f89b615c64de12a2ad33 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_banzai, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 12:04:08 compute-0 podman[88741]: 2025-12-09 12:04:08.972824694 +0000 UTC m=+0.146310107 container attach b3859f5c511bc7d3c37f1ee4a8fa2ebdf85cd0e4cd34f89b615c64de12a2ad33 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 12:04:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad394d7e459167c4956fc5979a05d0d7417e9bf79f720d9dd28039773468c855-merged.mount: Deactivated successfully.
Dec 09 12:04:09 compute-0 podman[88631]: 2025-12-09 12:04:09.00742182 +0000 UTC m=+0.616745757 container remove 53dbeaaff7cf66799da2a5ebfbc1c29349ecd7d36596a3fb9dbbb38fd62b0d4d (image=quay.io/ceph/ceph:v19, name=busy_cannon, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 09 12:04:09 compute-0 systemd[1]: libpod-conmon-53dbeaaff7cf66799da2a5ebfbc1c29349ecd7d36596a3fb9dbbb38fd62b0d4d.scope: Deactivated successfully.
Dec 09 12:04:09 compute-0 sudo[88621]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:09 compute-0 sudo[88866]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfemfeiizpmykcdyimqnjdbxxbybclko ; /usr/bin/python3'
Dec 09 12:04:09 compute-0 sudo[88866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:04:09 compute-0 lvm[88873]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 09 12:04:09 compute-0 lvm[88873]: VG ceph_vg0 finished
Dec 09 12:04:09 compute-0 brave_banzai[88758]: {}
Dec 09 12:04:09 compute-0 systemd[1]: libpod-b3859f5c511bc7d3c37f1ee4a8fa2ebdf85cd0e4cd34f89b615c64de12a2ad33.scope: Deactivated successfully.
Dec 09 12:04:09 compute-0 systemd[1]: libpod-b3859f5c511bc7d3c37f1ee4a8fa2ebdf85cd0e4cd34f89b615c64de12a2ad33.scope: Consumed 1.110s CPU time.
Dec 09 12:04:09 compute-0 podman[88741]: 2025-12-09 12:04:09.689058578 +0000 UTC m=+0.862543981 container died b3859f5c511bc7d3c37f1ee4a8fa2ebdf85cd0e4cd34f89b615c64de12a2ad33 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_banzai, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 12:04:09 compute-0 python3[88871]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a -f 'name=ceph-?(.*)-mgr.*' --format \{\{\.Command\}\} --no-trunc _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 12:04:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-18dd79ff02396e9a7ed1bc6e8e9e82f15ee5ae8f3f6a435cbc5068eba749144a-merged.mount: Deactivated successfully.
Dec 09 12:04:09 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Dec 09 12:04:09 compute-0 podman[88741]: 2025-12-09 12:04:09.747161887 +0000 UTC m=+0.920647280 container remove b3859f5c511bc7d3c37f1ee4a8fa2ebdf85cd0e4cd34f89b615c64de12a2ad33 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_banzai, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 12:04:09 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Dec 09 12:04:09 compute-0 systemd[1]: libpod-conmon-b3859f5c511bc7d3c37f1ee4a8fa2ebdf85cd0e4cd34f89b615c64de12a2ad33.scope: Deactivated successfully.
Dec 09 12:04:09 compute-0 sudo[88866]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:09 compute-0 sudo[88573]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:09 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 12:04:09 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:09 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 12:04:09 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:09 compute-0 ceph-mgr[74679]: [progress INFO root] update: starting ev 84ff1362-609f-4460-8587-d56a883d98a9 (Updating rgw.rgw deployment (+3 -> 3))
Dec 09 12:04:09 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.mjhisb", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Dec 09 12:04:09 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.mjhisb", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 09 12:04:09 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.mjhisb", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 09 12:04:09 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Dec 09 12:04:09 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:09 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 12:04:09 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:04:09 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-2.mjhisb on compute-2
Dec 09 12:04:09 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-2.mjhisb on compute-2
Dec 09 12:04:09 compute-0 ceph-mon[74388]: 2.18 deep-scrub starts
Dec 09 12:04:09 compute-0 ceph-mon[74388]: 2.18 deep-scrub ok
Dec 09 12:04:09 compute-0 ceph-mon[74388]: pgmap v94: 162 pgs: 50 peering, 112 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec 09 12:04:09 compute-0 ceph-mon[74388]: 5.3 scrub starts
Dec 09 12:04:09 compute-0 ceph-mon[74388]: 5.3 scrub ok
Dec 09 12:04:09 compute-0 ceph-mon[74388]: from='client.? 192.168.122.100:0/3933215134' entity='client.admin' 
Dec 09 12:04:09 compute-0 ceph-mon[74388]: 5.10 deep-scrub starts
Dec 09 12:04:09 compute-0 ceph-mon[74388]: 5.10 deep-scrub ok
Dec 09 12:04:09 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:09 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:09 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.mjhisb", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 09 12:04:09 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.mjhisb", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 09 12:04:10 compute-0 sudo[88923]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbdyisumnyveioizvmemtzvkybbqfnfb ; /usr/bin/python3'
Dec 09 12:04:10 compute-0 sudo[88923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:04:10 compute-0 python3[88925]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 750b57e3-924f-51a5-ab09-01517535f732 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-0.wfxreg/server_addr 192.168.122.100 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 12:04:10 compute-0 podman[88926]: 2025-12-09 12:04:10.314719397 +0000 UTC m=+0.048918308 container create ad28f06ba8a9658389e5bcf6d0c9beb1647dd11d0a3a27915e05a9c13e5f2e1f (image=quay.io/ceph/ceph:v19, name=relaxed_pare, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 12:04:10 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v95: 162 pgs: 50 peering, 112 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec 09 12:04:10 compute-0 systemd[1]: Started libpod-conmon-ad28f06ba8a9658389e5bcf6d0c9beb1647dd11d0a3a27915e05a9c13e5f2e1f.scope.
Dec 09 12:04:10 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:04:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af954af5dc9a4495f110cdd46692b3930a7fe025daf3850a063a786677db3e20/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af954af5dc9a4495f110cdd46692b3930a7fe025daf3850a063a786677db3e20/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af954af5dc9a4495f110cdd46692b3930a7fe025daf3850a063a786677db3e20/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:10 compute-0 podman[88926]: 2025-12-09 12:04:10.292489187 +0000 UTC m=+0.026688128 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:04:10 compute-0 podman[88926]: 2025-12-09 12:04:10.398257411 +0000 UTC m=+0.132456342 container init ad28f06ba8a9658389e5bcf6d0c9beb1647dd11d0a3a27915e05a9c13e5f2e1f (image=quay.io/ceph/ceph:v19, name=relaxed_pare, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 09 12:04:10 compute-0 podman[88926]: 2025-12-09 12:04:10.427231413 +0000 UTC m=+0.161430324 container start ad28f06ba8a9658389e5bcf6d0c9beb1647dd11d0a3a27915e05a9c13e5f2e1f (image=quay.io/ceph/ceph:v19, name=relaxed_pare, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 12:04:10 compute-0 podman[88926]: 2025-12-09 12:04:10.430874322 +0000 UTC m=+0.165073263 container attach ad28f06ba8a9658389e5bcf6d0c9beb1647dd11d0a3a27915e05a9c13e5f2e1f (image=quay.io/ceph/ceph:v19, name=relaxed_pare, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Dec 09 12:04:10 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Dec 09 12:04:10 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Dec 09 12:04:10 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-0.wfxreg/server_addr}] v 0)
Dec 09 12:04:10 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/709932697' entity='client.admin' 
Dec 09 12:04:10 compute-0 systemd[1]: libpod-ad28f06ba8a9658389e5bcf6d0c9beb1647dd11d0a3a27915e05a9c13e5f2e1f.scope: Deactivated successfully.
Dec 09 12:04:10 compute-0 podman[88926]: 2025-12-09 12:04:10.800937866 +0000 UTC m=+0.535136777 container died ad28f06ba8a9658389e5bcf6d0c9beb1647dd11d0a3a27915e05a9c13e5f2e1f (image=quay.io/ceph/ceph:v19, name=relaxed_pare, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec 09 12:04:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-af954af5dc9a4495f110cdd46692b3930a7fe025daf3850a063a786677db3e20-merged.mount: Deactivated successfully.
Dec 09 12:04:10 compute-0 podman[88926]: 2025-12-09 12:04:10.839871855 +0000 UTC m=+0.574070766 container remove ad28f06ba8a9658389e5bcf6d0c9beb1647dd11d0a3a27915e05a9c13e5f2e1f (image=quay.io/ceph/ceph:v19, name=relaxed_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 12:04:10 compute-0 systemd[1]: libpod-conmon-ad28f06ba8a9658389e5bcf6d0c9beb1647dd11d0a3a27915e05a9c13e5f2e1f.scope: Deactivated successfully.
Dec 09 12:04:10 compute-0 sudo[88923]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:10 compute-0 ceph-mon[74388]: 2.12 scrub starts
Dec 09 12:04:10 compute-0 ceph-mon[74388]: 2.12 scrub ok
Dec 09 12:04:10 compute-0 ceph-mon[74388]: 3.6 scrub starts
Dec 09 12:04:10 compute-0 ceph-mon[74388]: 3.6 scrub ok
Dec 09 12:04:10 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:10 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:04:10 compute-0 ceph-mon[74388]: Deploying daemon rgw.rgw.compute-2.mjhisb on compute-2
Dec 09 12:04:10 compute-0 ceph-mon[74388]: 4.13 scrub starts
Dec 09 12:04:10 compute-0 ceph-mon[74388]: 4.13 scrub ok
Dec 09 12:04:10 compute-0 ceph-mon[74388]: from='client.? 192.168.122.100:0/709932697' entity='client.admin' 
Dec 09 12:04:11 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 12:04:11 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 09 12:04:11 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:11 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 09 12:04:11 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:11 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec 09 12:04:11 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:11 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.mhnafh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Dec 09 12:04:11 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.mhnafh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 09 12:04:11 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.mhnafh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 09 12:04:11 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Dec 09 12:04:11 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:11 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 12:04:11 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:04:11 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-1.mhnafh on compute-1
Dec 09 12:04:11 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-1.mhnafh on compute-1
Dec 09 12:04:11 compute-0 sudo[89002]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpjizpdkdxztcilxeeffidyvsnhlphxe ; /usr/bin/python3'
Dec 09 12:04:11 compute-0 sudo[89002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:04:11 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 4.0 scrub starts
Dec 09 12:04:11 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 4.0 scrub ok
Dec 09 12:04:11 compute-0 python3[89004]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 750b57e3-924f-51a5-ab09-01517535f732 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-1.lorvly/server_addr 192.168.122.101 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 12:04:11 compute-0 podman[89005]: 2025-12-09 12:04:11.8572586 +0000 UTC m=+0.038613539 container create a69d951e6e1d12954c338781af1eed738162dc6c4d2404d96d540a37e68972eb (image=quay.io/ceph/ceph:v19, name=frosty_wu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec 09 12:04:11 compute-0 systemd[1]: Started libpod-conmon-a69d951e6e1d12954c338781af1eed738162dc6c4d2404d96d540a37e68972eb.scope.
Dec 09 12:04:11 compute-0 ceph-mon[74388]: 3.e scrub starts
Dec 09 12:04:11 compute-0 ceph-mon[74388]: 3.e scrub ok
Dec 09 12:04:11 compute-0 ceph-mon[74388]: pgmap v95: 162 pgs: 50 peering, 112 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec 09 12:04:11 compute-0 ceph-mon[74388]: 3.7 scrub starts
Dec 09 12:04:11 compute-0 ceph-mon[74388]: 3.7 scrub ok
Dec 09 12:04:11 compute-0 ceph-mon[74388]: 5.11 scrub starts
Dec 09 12:04:11 compute-0 ceph-mon[74388]: 5.11 scrub ok
Dec 09 12:04:11 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:11 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:11 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:11 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.mhnafh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 09 12:04:11 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.mhnafh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 09 12:04:11 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:11 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:04:11 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:04:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee0112af8be5c6dc1367c909d2a7177d939203528290c8f36e7e04c163abbac4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee0112af8be5c6dc1367c909d2a7177d939203528290c8f36e7e04c163abbac4/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee0112af8be5c6dc1367c909d2a7177d939203528290c8f36e7e04c163abbac4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:11 compute-0 podman[89005]: 2025-12-09 12:04:11.933306608 +0000 UTC m=+0.114661547 container init a69d951e6e1d12954c338781af1eed738162dc6c4d2404d96d540a37e68972eb (image=quay.io/ceph/ceph:v19, name=frosty_wu, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 12:04:11 compute-0 podman[89005]: 2025-12-09 12:04:11.841237064 +0000 UTC m=+0.022592023 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:04:11 compute-0 podman[89005]: 2025-12-09 12:04:11.939818221 +0000 UTC m=+0.121173160 container start a69d951e6e1d12954c338781af1eed738162dc6c4d2404d96d540a37e68972eb (image=quay.io/ceph/ceph:v19, name=frosty_wu, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 09 12:04:11 compute-0 podman[89005]: 2025-12-09 12:04:11.943050578 +0000 UTC m=+0.124405537 container attach a69d951e6e1d12954c338781af1eed738162dc6c4d2404d96d540a37e68972eb (image=quay.io/ceph/ceph:v19, name=frosty_wu, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 12:04:12 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-1.lorvly/server_addr}] v 0)
Dec 09 12:04:12 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/375967125' entity='client.admin' 
Dec 09 12:04:12 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v96: 162 pgs: 162 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 09 12:04:12 compute-0 systemd[1]: libpod-a69d951e6e1d12954c338781af1eed738162dc6c4d2404d96d540a37e68972eb.scope: Deactivated successfully.
Dec 09 12:04:12 compute-0 podman[89045]: 2025-12-09 12:04:12.371380936 +0000 UTC m=+0.020097402 container died a69d951e6e1d12954c338781af1eed738162dc6c4d2404d96d540a37e68972eb (image=quay.io/ceph/ceph:v19, name=frosty_wu, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 09 12:04:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee0112af8be5c6dc1367c909d2a7177d939203528290c8f36e7e04c163abbac4-merged.mount: Deactivated successfully.
Dec 09 12:04:12 compute-0 podman[89045]: 2025-12-09 12:04:12.406994055 +0000 UTC m=+0.055710511 container remove a69d951e6e1d12954c338781af1eed738162dc6c4d2404d96d540a37e68972eb (image=quay.io/ceph/ceph:v19, name=frosty_wu, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 12:04:12 compute-0 systemd[1]: libpod-conmon-a69d951e6e1d12954c338781af1eed738162dc6c4d2404d96d540a37e68972eb.scope: Deactivated successfully.
Dec 09 12:04:12 compute-0 sudo[89002]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:12 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Dec 09 12:04:12 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Dec 09 12:04:12 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Dec 09 12:04:12 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0)
Dec 09 12:04:12 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.mjhisb' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Dec 09 12:04:12 compute-0 ceph-mgr[74679]: [progress WARNING root] Starting Global Recovery Event,1 pgs not in active + clean state
Dec 09 12:04:12 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 4.7 deep-scrub starts
Dec 09 12:04:12 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 33 pg[8.0( empty local-lis/les=0/0 n=0 ec=33/33 lis/c=0/0 les/c/f=0/0/0 sis=33) [1] r=0 lpr=33 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:04:12 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 4.7 deep-scrub ok
Dec 09 12:04:12 compute-0 ceph-mon[74388]: 2.f scrub starts
Dec 09 12:04:12 compute-0 ceph-mon[74388]: 2.f scrub ok
Dec 09 12:04:12 compute-0 ceph-mon[74388]: Deploying daemon rgw.rgw.compute-1.mhnafh on compute-1
Dec 09 12:04:12 compute-0 ceph-mon[74388]: 4.0 scrub starts
Dec 09 12:04:12 compute-0 ceph-mon[74388]: 4.0 scrub ok
Dec 09 12:04:12 compute-0 ceph-mon[74388]: 3.14 scrub starts
Dec 09 12:04:12 compute-0 ceph-mon[74388]: 3.14 scrub ok
Dec 09 12:04:12 compute-0 ceph-mon[74388]: from='client.? 192.168.122.100:0/375967125' entity='client.admin' 
Dec 09 12:04:12 compute-0 ceph-mon[74388]: osdmap e33: 3 total, 3 up, 3 in
Dec 09 12:04:12 compute-0 ceph-mon[74388]: from='client.? 192.168.122.102:0/3239855522' entity='client.rgw.rgw.compute-2.mjhisb' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Dec 09 12:04:12 compute-0 ceph-mon[74388]: from='client.? ' entity='client.rgw.rgw.compute-2.mjhisb' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Dec 09 12:04:12 compute-0 ceph-mon[74388]: 4.7 deep-scrub starts
Dec 09 12:04:12 compute-0 ceph-mon[74388]: 4.7 deep-scrub ok
Dec 09 12:04:13 compute-0 sudo[89083]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hucvuotlupknnirzucriucxrvidytvtz ; /usr/bin/python3'
Dec 09 12:04:13 compute-0 sudo[89083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:04:13 compute-0 python3[89085]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 750b57e3-924f-51a5-ab09-01517535f732 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-2.hvlbot/server_addr 192.168.122.102 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 12:04:13 compute-0 podman[89086]: 2025-12-09 12:04:13.364685709 +0000 UTC m=+0.039172477 container create fea381031de874aee71e8fdf3b1fca82bf1ce05f90aa7be14df8bbbc8a37eb01 (image=quay.io/ceph/ceph:v19, name=priceless_cannon, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 09 12:04:13 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 09 12:04:13 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:13 compute-0 systemd[1]: Started libpod-conmon-fea381031de874aee71e8fdf3b1fca82bf1ce05f90aa7be14df8bbbc8a37eb01.scope.
Dec 09 12:04:13 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 09 12:04:13 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:13 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec 09 12:04:13 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:13 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:04:13 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.tyqqak", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Dec 09 12:04:13 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.tyqqak", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 09 12:04:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44810620b3b93b6f509fc98f4d7f90a61f65152d80761307ce96e1212efb1e4d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44810620b3b93b6f509fc98f4d7f90a61f65152d80761307ce96e1212efb1e4d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44810620b3b93b6f509fc98f4d7f90a61f65152d80761307ce96e1212efb1e4d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:13 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.tyqqak", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 09 12:04:13 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Dec 09 12:04:13 compute-0 podman[89086]: 2025-12-09 12:04:13.437581103 +0000 UTC m=+0.112067891 container init fea381031de874aee71e8fdf3b1fca82bf1ce05f90aa7be14df8bbbc8a37eb01 (image=quay.io/ceph/ceph:v19, name=priceless_cannon, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 12:04:13 compute-0 podman[89086]: 2025-12-09 12:04:13.346562994 +0000 UTC m=+0.021049782 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:04:13 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:13 compute-0 podman[89086]: 2025-12-09 12:04:13.445452583 +0000 UTC m=+0.119939351 container start fea381031de874aee71e8fdf3b1fca82bf1ce05f90aa7be14df8bbbc8a37eb01 (image=quay.io/ceph/ceph:v19, name=priceless_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 09 12:04:13 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 12:04:13 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:04:13 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.tyqqak on compute-0
Dec 09 12:04:13 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.tyqqak on compute-0
Dec 09 12:04:13 compute-0 podman[89086]: 2025-12-09 12:04:13.448487392 +0000 UTC m=+0.122974180 container attach fea381031de874aee71e8fdf3b1fca82bf1ce05f90aa7be14df8bbbc8a37eb01 (image=quay.io/ceph/ceph:v19, name=priceless_cannon, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 09 12:04:13 compute-0 sudo[89105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 12:04:13 compute-0 sudo[89105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:13 compute-0 sudo[89105]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:13 compute-0 sudo[89130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 750b57e3-924f-51a5-ab09-01517535f732
Dec 09 12:04:13 compute-0 sudo[89130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:13 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Dec 09 12:04:13 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.mjhisb' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Dec 09 12:04:13 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Dec 09 12:04:13 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Dec 09 12:04:13 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 34 pg[8.0( empty local-lis/les=33/34 n=0 ec=33/33 lis/c=0/0 les/c/f=0/0/0 sis=33) [1] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:04:13 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Dec 09 12:04:13 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Dec 09 12:04:13 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-2.hvlbot/server_addr}] v 0)
Dec 09 12:04:13 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/479889217' entity='client.admin' 
Dec 09 12:04:13 compute-0 systemd[1]: libpod-fea381031de874aee71e8fdf3b1fca82bf1ce05f90aa7be14df8bbbc8a37eb01.scope: Deactivated successfully.
Dec 09 12:04:13 compute-0 ceph-mon[74388]: 5.e scrub starts
Dec 09 12:04:13 compute-0 ceph-mon[74388]: 5.e scrub ok
Dec 09 12:04:13 compute-0 ceph-mon[74388]: pgmap v96: 162 pgs: 162 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 09 12:04:13 compute-0 ceph-mon[74388]: 3.13 scrub starts
Dec 09 12:04:13 compute-0 ceph-mon[74388]: 3.13 scrub ok
Dec 09 12:04:13 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:13 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:13 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:13 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.tyqqak", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 09 12:04:13 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.tyqqak", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 09 12:04:13 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:13 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:04:13 compute-0 ceph-mon[74388]: from='client.? ' entity='client.rgw.rgw.compute-2.mjhisb' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Dec 09 12:04:13 compute-0 ceph-mon[74388]: osdmap e34: 3 total, 3 up, 3 in
Dec 09 12:04:13 compute-0 ceph-mon[74388]: 5.6 scrub starts
Dec 09 12:04:13 compute-0 ceph-mon[74388]: 5.6 scrub ok
Dec 09 12:04:13 compute-0 ceph-mon[74388]: from='client.? 192.168.122.100:0/479889217' entity='client.admin' 
Dec 09 12:04:13 compute-0 podman[89218]: 2025-12-09 12:04:13.921092825 +0000 UTC m=+0.026953427 container died fea381031de874aee71e8fdf3b1fca82bf1ce05f90aa7be14df8bbbc8a37eb01 (image=quay.io/ceph/ceph:v19, name=priceless_cannon, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 09 12:04:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-44810620b3b93b6f509fc98f4d7f90a61f65152d80761307ce96e1212efb1e4d-merged.mount: Deactivated successfully.
Dec 09 12:04:13 compute-0 podman[89218]: 2025-12-09 12:04:13.957987386 +0000 UTC m=+0.063847978 container remove fea381031de874aee71e8fdf3b1fca82bf1ce05f90aa7be14df8bbbc8a37eb01 (image=quay.io/ceph/ceph:v19, name=priceless_cannon, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 12:04:13 compute-0 systemd[1]: libpod-conmon-fea381031de874aee71e8fdf3b1fca82bf1ce05f90aa7be14df8bbbc8a37eb01.scope: Deactivated successfully.
Dec 09 12:04:13 compute-0 podman[89224]: 2025-12-09 12:04:13.966801855 +0000 UTC m=+0.057895742 container create 0bfe4abfa055f707881e9aeb0841a66ef282bf206682d2f3f81d40eee99ac767 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_beaver, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 09 12:04:13 compute-0 sudo[89083]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:14 compute-0 systemd[1]: Started libpod-conmon-0bfe4abfa055f707881e9aeb0841a66ef282bf206682d2f3f81d40eee99ac767.scope.
Dec 09 12:04:14 compute-0 podman[89224]: 2025-12-09 12:04:13.9359026 +0000 UTC m=+0.026996517 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 09 12:04:14 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:04:14 compute-0 podman[89224]: 2025-12-09 12:04:14.048111816 +0000 UTC m=+0.139205733 container init 0bfe4abfa055f707881e9aeb0841a66ef282bf206682d2f3f81d40eee99ac767 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_beaver, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 09 12:04:14 compute-0 podman[89224]: 2025-12-09 12:04:14.053758821 +0000 UTC m=+0.144852708 container start 0bfe4abfa055f707881e9aeb0841a66ef282bf206682d2f3f81d40eee99ac767 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_beaver, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 12:04:14 compute-0 heuristic_beaver[89247]: 167 167
Dec 09 12:04:14 compute-0 podman[89224]: 2025-12-09 12:04:14.057441903 +0000 UTC m=+0.148535820 container attach 0bfe4abfa055f707881e9aeb0841a66ef282bf206682d2f3f81d40eee99ac767 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_beaver, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 12:04:14 compute-0 podman[89224]: 2025-12-09 12:04:14.058348863 +0000 UTC m=+0.149442750 container died 0bfe4abfa055f707881e9aeb0841a66ef282bf206682d2f3f81d40eee99ac767 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 12:04:14 compute-0 systemd[1]: libpod-0bfe4abfa055f707881e9aeb0841a66ef282bf206682d2f3f81d40eee99ac767.scope: Deactivated successfully.
Dec 09 12:04:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a01b19dd34cecd928a34ac08cf9a581e97aa2d2757f746a65e5997fba1993d5-merged.mount: Deactivated successfully.
Dec 09 12:04:14 compute-0 podman[89224]: 2025-12-09 12:04:14.093395383 +0000 UTC m=+0.184489280 container remove 0bfe4abfa055f707881e9aeb0841a66ef282bf206682d2f3f81d40eee99ac767 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_beaver, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 09 12:04:14 compute-0 systemd[1]: libpod-conmon-0bfe4abfa055f707881e9aeb0841a66ef282bf206682d2f3f81d40eee99ac767.scope: Deactivated successfully.
Dec 09 12:04:14 compute-0 sudo[89285]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erinqhqlmpssonhpfizaugeostouuvkj ; /usr/bin/python3'
Dec 09 12:04:14 compute-0 sudo[89285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:04:14 compute-0 systemd[1]: Reloading.
Dec 09 12:04:14 compute-0 systemd-rc-local-generator[89314]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 12:04:14 compute-0 systemd-sysv-generator[89319]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 12:04:14 compute-0 python3[89289]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 750b57e3-924f-51a5-ab09-01517535f732 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module disable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 12:04:14 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v99: 163 pgs: 1 unknown, 162 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 09 12:04:14 compute-0 podman[89325]: 2025-12-09 12:04:14.329873171 +0000 UTC m=+0.042711974 container create 768a2a14ab1d2a54c6fa02f15e54adb0dac0702f6d9cc702110cba5189dca8f9 (image=quay.io/ceph/ceph:v19, name=gifted_mestorf, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 12:04:14 compute-0 podman[89325]: 2025-12-09 12:04:14.312437698 +0000 UTC m=+0.025276521 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:04:14 compute-0 systemd[1]: Started libpod-conmon-768a2a14ab1d2a54c6fa02f15e54adb0dac0702f6d9cc702110cba5189dca8f9.scope.
Dec 09 12:04:14 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:04:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17049af348feb3f59ef9843f23c392c55a330e778dbcef1e64849c176e0ad3b7/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17049af348feb3f59ef9843f23c392c55a330e778dbcef1e64849c176e0ad3b7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17049af348feb3f59ef9843f23c392c55a330e778dbcef1e64849c176e0ad3b7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:14 compute-0 systemd[1]: Reloading.
Dec 09 12:04:14 compute-0 podman[89325]: 2025-12-09 12:04:14.492151521 +0000 UTC m=+0.204990354 container init 768a2a14ab1d2a54c6fa02f15e54adb0dac0702f6d9cc702110cba5189dca8f9 (image=quay.io/ceph/ceph:v19, name=gifted_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec 09 12:04:14 compute-0 podman[89325]: 2025-12-09 12:04:14.501153655 +0000 UTC m=+0.213992458 container start 768a2a14ab1d2a54c6fa02f15e54adb0dac0702f6d9cc702110cba5189dca8f9 (image=quay.io/ceph/ceph:v19, name=gifted_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec 09 12:04:14 compute-0 podman[89325]: 2025-12-09 12:04:14.505622183 +0000 UTC m=+0.218461016 container attach 768a2a14ab1d2a54c6fa02f15e54adb0dac0702f6d9cc702110cba5189dca8f9 (image=quay.io/ceph/ceph:v19, name=gifted_mestorf, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 12:04:14 compute-0 systemd-sysv-generator[89381]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 12:04:14 compute-0 systemd-rc-local-generator[89378]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 12:04:14 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Dec 09 12:04:14 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Dec 09 12:04:14 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Dec 09 12:04:14 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 35 pg[9.0( empty local-lis/les=0/0 n=0 ec=35/35 lis/c=0/0 les/c/f=0/0/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:04:14 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Dec 09 12:04:14 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.mhnafh' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec 09 12:04:14 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Dec 09 12:04:14 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.mjhisb' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec 09 12:04:14 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 5.c scrub starts
Dec 09 12:04:14 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 5.c scrub ok
Dec 09 12:04:14 compute-0 systemd[1]: Starting Ceph rgw.rgw.compute-0.tyqqak for 750b57e3-924f-51a5-ab09-01517535f732...
Dec 09 12:04:14 compute-0 ceph-mon[74388]: 4.15 scrub starts
Dec 09 12:04:14 compute-0 ceph-mon[74388]: 4.15 scrub ok
Dec 09 12:04:14 compute-0 ceph-mon[74388]: Deploying daemon rgw.rgw.compute-0.tyqqak on compute-0
Dec 09 12:04:14 compute-0 ceph-mon[74388]: 5.15 scrub starts
Dec 09 12:04:14 compute-0 ceph-mon[74388]: 5.15 scrub ok
Dec 09 12:04:14 compute-0 ceph-mon[74388]: osdmap e35: 3 total, 3 up, 3 in
Dec 09 12:04:14 compute-0 ceph-mon[74388]: from='client.? ' entity='client.rgw.rgw.compute-1.mhnafh' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec 09 12:04:14 compute-0 ceph-mon[74388]: from='client.? 192.168.122.102:0/4123135884' entity='client.rgw.rgw.compute-2.mjhisb' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec 09 12:04:14 compute-0 ceph-mon[74388]: from='client.? ' entity='client.rgw.rgw.compute-2.mjhisb' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec 09 12:04:14 compute-0 ceph-mon[74388]: from='client.? 192.168.122.101:0/1132313064' entity='client.rgw.rgw.compute-1.mhnafh' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec 09 12:04:14 compute-0 ceph-mon[74388]: 5.c scrub starts
Dec 09 12:04:14 compute-0 ceph-mon[74388]: 5.c scrub ok
Dec 09 12:04:14 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module disable", "module": "dashboard"} v 0)
Dec 09 12:04:14 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/392509742' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Dec 09 12:04:15 compute-0 podman[89452]: 2025-12-09 12:04:15.000515977 +0000 UTC m=+0.038225756 container create 0cfe46407c2de0c9320868261859b72fabdd841158b949195c76eb8fe18dc979 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-750b57e3-924f-51a5-ab09-01517535f732-rgw-rgw-compute-0-tyqqak, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 09 12:04:15 compute-0 podman[89452]: 2025-12-09 12:04:14.982575237 +0000 UTC m=+0.020285026 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 09 12:04:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4638031e27d9dd590ee48af5ba943d87f0364f3d60734e449056651af88f37e8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4638031e27d9dd590ee48af5ba943d87f0364f3d60734e449056651af88f37e8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4638031e27d9dd590ee48af5ba943d87f0364f3d60734e449056651af88f37e8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4638031e27d9dd590ee48af5ba943d87f0364f3d60734e449056651af88f37e8/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.tyqqak supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:15 compute-0 podman[89452]: 2025-12-09 12:04:15.094953598 +0000 UTC m=+0.132663377 container init 0cfe46407c2de0c9320868261859b72fabdd841158b949195c76eb8fe18dc979 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-750b57e3-924f-51a5-ab09-01517535f732-rgw-rgw-compute-0-tyqqak, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 09 12:04:15 compute-0 podman[89452]: 2025-12-09 12:04:15.101238305 +0000 UTC m=+0.138948074 container start 0cfe46407c2de0c9320868261859b72fabdd841158b949195c76eb8fe18dc979 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-750b57e3-924f-51a5-ab09-01517535f732-rgw-rgw-compute-0-tyqqak, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec 09 12:04:15 compute-0 bash[89452]: 0cfe46407c2de0c9320868261859b72fabdd841158b949195c76eb8fe18dc979
Dec 09 12:04:15 compute-0 systemd[1]: Started Ceph rgw.rgw.compute-0.tyqqak for 750b57e3-924f-51a5-ab09-01517535f732.
Dec 09 12:04:15 compute-0 sudo[89130]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:15 compute-0 radosgw[89472]: deferred set uid:gid to 167:167 (ceph:ceph)
Dec 09 12:04:15 compute-0 radosgw[89472]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process radosgw, pid 2
Dec 09 12:04:15 compute-0 radosgw[89472]: framework: beast
Dec 09 12:04:15 compute-0 radosgw[89472]: framework conf key: endpoint, val: 192.168.122.100:8082
Dec 09 12:04:15 compute-0 radosgw[89472]: init_numa not setting numa affinity
Dec 09 12:04:15 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 12:04:15 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:15 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 12:04:15 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:15 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec 09 12:04:15 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:15 compute-0 ceph-mgr[74679]: [progress INFO root] complete: finished ev 84ff1362-609f-4460-8587-d56a883d98a9 (Updating rgw.rgw deployment (+3 -> 3))
Dec 09 12:04:15 compute-0 ceph-mgr[74679]: [progress INFO root] Completed event 84ff1362-609f-4460-8587-d56a883d98a9 (Updating rgw.rgw deployment (+3 -> 3)) in 5 seconds
Dec 09 12:04:15 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Dec 09 12:04:15 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Dec 09 12:04:15 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec 09 12:04:15 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:15 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec 09 12:04:15 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:15 compute-0 ceph-mgr[74679]: [progress INFO root] update: starting ev e01a705e-248f-42bf-9600-65699dbbc949 (Updating ingress.rgw.default deployment (+4 -> 4))
Dec 09 12:04:15 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/monitor_password}] v 0)
Dec 09 12:04:15 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:15 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-0.rutkbd on compute-0
Dec 09 12:04:15 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-0.rutkbd on compute-0
Dec 09 12:04:15 compute-0 sudo[90059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 12:04:15 compute-0 sudo[90059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:15 compute-0 sudo[90059]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:15 compute-0 sudo[90084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/haproxy:2.3 --timeout 895 _orch deploy --fsid 750b57e3-924f-51a5-ab09-01517535f732
Dec 09 12:04:15 compute-0 sudo[90084]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:15 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Dec 09 12:04:15 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.mhnafh' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Dec 09 12:04:15 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.mjhisb' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Dec 09 12:04:15 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Dec 09 12:04:15 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Dec 09 12:04:15 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 36 pg[9.0( empty local-lis/les=35/36 n=0 ec=35/35 lis/c=0/0 les/c/f=0/0/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:04:15 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 3.b scrub starts
Dec 09 12:04:15 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 3.b scrub ok
Dec 09 12:04:16 compute-0 ceph-mon[74388]: 4.9 scrub starts
Dec 09 12:04:16 compute-0 ceph-mon[74388]: 4.9 scrub ok
Dec 09 12:04:16 compute-0 ceph-mon[74388]: pgmap v99: 163 pgs: 1 unknown, 162 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 09 12:04:16 compute-0 ceph-mon[74388]: from='client.? 192.168.122.100:0/392509742' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Dec 09 12:04:16 compute-0 ceph-mon[74388]: 3.10 scrub starts
Dec 09 12:04:16 compute-0 ceph-mon[74388]: 3.10 scrub ok
Dec 09 12:04:16 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:16 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:16 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:16 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:16 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:16 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:16 compute-0 ceph-mon[74388]: from='client.? ' entity='client.rgw.rgw.compute-1.mhnafh' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Dec 09 12:04:16 compute-0 ceph-mon[74388]: from='client.? ' entity='client.rgw.rgw.compute-2.mjhisb' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Dec 09 12:04:16 compute-0 ceph-mon[74388]: osdmap e36: 3 total, 3 up, 3 in
Dec 09 12:04:16 compute-0 ceph-mon[74388]: 3.b scrub starts
Dec 09 12:04:16 compute-0 ceph-mon[74388]: 3.b scrub ok
Dec 09 12:04:16 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v102: 164 pgs: 164 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 2.0 KiB/s wr, 11 op/s
Dec 09 12:04:16 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 12:04:16 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/392509742' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Dec 09 12:04:16 compute-0 gifted_mestorf[89343]: module 'dashboard' is already disabled
Dec 09 12:04:16 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.wfxreg(active, since 2m), standbys: compute-2.hvlbot, compute-1.lorvly
Dec 09 12:04:16 compute-0 systemd[1]: libpod-768a2a14ab1d2a54c6fa02f15e54adb0dac0702f6d9cc702110cba5189dca8f9.scope: Deactivated successfully.
Dec 09 12:04:16 compute-0 podman[89325]: 2025-12-09 12:04:16.440324716 +0000 UTC m=+2.153163519 container died 768a2a14ab1d2a54c6fa02f15e54adb0dac0702f6d9cc702110cba5189dca8f9 (image=quay.io/ceph/ceph:v19, name=gifted_mestorf, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 12:04:16 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Dec 09 12:04:16 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 4.b scrub starts
Dec 09 12:04:16 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 4.b scrub ok
Dec 09 12:04:16 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Dec 09 12:04:16 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Dec 09 12:04:16 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Dec 09 12:04:16 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/16791070' entity='client.rgw.rgw.compute-0.tyqqak' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec 09 12:04:16 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Dec 09 12:04:16 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.mjhisb' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec 09 12:04:16 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Dec 09 12:04:16 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.mhnafh' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec 09 12:04:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-17049af348feb3f59ef9843f23c392c55a330e778dbcef1e64849c176e0ad3b7-merged.mount: Deactivated successfully.
Dec 09 12:04:17 compute-0 ceph-mgr[74679]: [progress INFO root] Writing back 12 completed events
Dec 09 12:04:17 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 5.a scrub starts
Dec 09 12:04:17 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 5.a scrub ok
Dec 09 12:04:17 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 09 12:04:17 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Dec 09 12:04:17 compute-0 ceph-mon[74388]: 3.1a scrub starts
Dec 09 12:04:17 compute-0 ceph-mon[74388]: 3.1a scrub ok
Dec 09 12:04:17 compute-0 ceph-mon[74388]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Dec 09 12:04:17 compute-0 ceph-mon[74388]: Deploying daemon haproxy.rgw.default.compute-0.rutkbd on compute-0
Dec 09 12:04:17 compute-0 ceph-mon[74388]: 5.16 scrub starts
Dec 09 12:04:17 compute-0 ceph-mon[74388]: 5.16 scrub ok
Dec 09 12:04:17 compute-0 ceph-mon[74388]: 4.8 scrub starts
Dec 09 12:04:17 compute-0 ceph-mon[74388]: 4.8 scrub ok
Dec 09 12:04:17 compute-0 ceph-mon[74388]: pgmap v102: 164 pgs: 164 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 2.0 KiB/s wr, 11 op/s
Dec 09 12:04:17 compute-0 ceph-mon[74388]: from='client.? 192.168.122.100:0/392509742' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Dec 09 12:04:17 compute-0 ceph-mon[74388]: mgrmap e11: compute-0.wfxreg(active, since 2m), standbys: compute-2.hvlbot, compute-1.lorvly
Dec 09 12:04:17 compute-0 ceph-mon[74388]: 4.b scrub starts
Dec 09 12:04:17 compute-0 ceph-mon[74388]: 4.b scrub ok
Dec 09 12:04:17 compute-0 ceph-mon[74388]: osdmap e37: 3 total, 3 up, 3 in
Dec 09 12:04:17 compute-0 ceph-mon[74388]: from='client.? 192.168.122.100:0/16791070' entity='client.rgw.rgw.compute-0.tyqqak' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec 09 12:04:17 compute-0 ceph-mon[74388]: from='client.? 192.168.122.102:0/4123135884' entity='client.rgw.rgw.compute-2.mjhisb' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec 09 12:04:17 compute-0 ceph-mon[74388]: from='client.? ' entity='client.rgw.rgw.compute-2.mjhisb' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec 09 12:04:17 compute-0 ceph-mon[74388]: from='client.? ' entity='client.rgw.rgw.compute-1.mhnafh' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec 09 12:04:17 compute-0 ceph-mon[74388]: from='client.? 192.168.122.101:0/1132313064' entity='client.rgw.rgw.compute-1.mhnafh' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec 09 12:04:17 compute-0 podman[89325]: 2025-12-09 12:04:17.932985563 +0000 UTC m=+3.645824366 container remove 768a2a14ab1d2a54c6fa02f15e54adb0dac0702f6d9cc702110cba5189dca8f9 (image=quay.io/ceph/ceph:v19, name=gifted_mestorf, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 09 12:04:17 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/16791070' entity='client.rgw.rgw.compute-0.tyqqak' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec 09 12:04:17 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.mjhisb' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec 09 12:04:17 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.mhnafh' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec 09 12:04:17 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Dec 09 12:04:17 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:17 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Dec 09 12:04:17 compute-0 sudo[89285]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:18 compute-0 systemd[1]: libpod-conmon-768a2a14ab1d2a54c6fa02f15e54adb0dac0702f6d9cc702110cba5189dca8f9.scope: Deactivated successfully.
Dec 09 12:04:18 compute-0 sudo[90220]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqwffekjgapubjkqxwbxtejrdhwkecdp ; /usr/bin/python3'
Dec 09 12:04:18 compute-0 sudo[90220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:04:18 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v105: 165 pgs: 1 unknown, 164 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 2.0 KiB/s wr, 11 op/s
Dec 09 12:04:18 compute-0 python3[90222]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 750b57e3-924f-51a5-ab09-01517535f732 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module enable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 12:04:18 compute-0 podman[90223]: 2025-12-09 12:04:18.387205101 +0000 UTC m=+0.025367024 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:04:18 compute-0 podman[90223]: 2025-12-09 12:04:18.492338925 +0000 UTC m=+0.130500828 container create 0e42f64c9a5cbec5316e8421012f88a714f88aff9f19dbf26dd41d75529a8778 (image=quay.io/ceph/ceph:v19, name=agitated_bassi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 12:04:18 compute-0 systemd[1]: Started libpod-conmon-0e42f64c9a5cbec5316e8421012f88a714f88aff9f19dbf26dd41d75529a8778.scope.
Dec 09 12:04:18 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:04:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87b19e405a88bc474c490b8c11f00458edbb18787a0eeb359c7a78534aa257b0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87b19e405a88bc474c490b8c11f00458edbb18787a0eeb359c7a78534aa257b0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87b19e405a88bc474c490b8c11f00458edbb18787a0eeb359c7a78534aa257b0/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:18 compute-0 podman[90223]: 2025-12-09 12:04:18.634939488 +0000 UTC m=+0.273101431 container init 0e42f64c9a5cbec5316e8421012f88a714f88aff9f19dbf26dd41d75529a8778 (image=quay.io/ceph/ceph:v19, name=agitated_bassi, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 12:04:18 compute-0 podman[90223]: 2025-12-09 12:04:18.645192665 +0000 UTC m=+0.283354568 container start 0e42f64c9a5cbec5316e8421012f88a714f88aff9f19dbf26dd41d75529a8778 (image=quay.io/ceph/ceph:v19, name=agitated_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec 09 12:04:18 compute-0 podman[90223]: 2025-12-09 12:04:18.695624651 +0000 UTC m=+0.333786554 container attach 0e42f64c9a5cbec5316e8421012f88a714f88aff9f19dbf26dd41d75529a8778 (image=quay.io/ceph/ceph:v19, name=agitated_bassi, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 12:04:18 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Dec 09 12:04:18 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Dec 09 12:04:18 compute-0 ceph-mon[74388]: 3.16 scrub starts
Dec 09 12:04:18 compute-0 ceph-mon[74388]: 3.16 scrub ok
Dec 09 12:04:18 compute-0 ceph-mon[74388]: 4.1 scrub starts
Dec 09 12:04:18 compute-0 ceph-mon[74388]: 4.1 scrub ok
Dec 09 12:04:18 compute-0 ceph-mon[74388]: 5.a scrub starts
Dec 09 12:04:18 compute-0 ceph-mon[74388]: 5.a scrub ok
Dec 09 12:04:18 compute-0 ceph-mon[74388]: from='client.? 192.168.122.100:0/16791070' entity='client.rgw.rgw.compute-0.tyqqak' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec 09 12:04:18 compute-0 ceph-mon[74388]: from='client.? ' entity='client.rgw.rgw.compute-2.mjhisb' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec 09 12:04:18 compute-0 ceph-mon[74388]: from='client.? ' entity='client.rgw.rgw.compute-1.mhnafh' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec 09 12:04:18 compute-0 ceph-mon[74388]: from='mgr.14124 192.168.122.100:0/1896270905' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:18 compute-0 ceph-mon[74388]: osdmap e38: 3 total, 3 up, 3 in
Dec 09 12:04:18 compute-0 ceph-mon[74388]: 3.f scrub starts
Dec 09 12:04:18 compute-0 ceph-mon[74388]: 3.f scrub ok
Dec 09 12:04:18 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Dec 09 12:04:19 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "dashboard"} v 0)
Dec 09 12:04:19 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3077613506' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Dec 09 12:04:19 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Dec 09 12:04:19 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Dec 09 12:04:19 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Dec 09 12:04:19 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.mjhisb' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec 09 12:04:19 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Dec 09 12:04:19 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.mhnafh' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec 09 12:04:19 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Dec 09 12:04:19 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/16791070' entity='client.rgw.rgw.compute-0.tyqqak' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec 09 12:04:19 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 39 pg[11.0( empty local-lis/les=0/0 n=0 ec=39/39 lis/c=0/0 les/c/f=0/0/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:04:19 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Dec 09 12:04:19 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Dec 09 12:04:19 compute-0 ceph-mgr[74679]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 12:04:19 compute-0 ceph-mgr[74679]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 12:04:19 compute-0 ceph-mgr[74679]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 12:04:19 compute-0 ceph-mgr[74679]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 12:04:19 compute-0 ceph-mgr[74679]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 12:04:19 compute-0 ceph-mgr[74679]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 12:04:20 compute-0 ceph-mon[74388]: 2.b deep-scrub starts
Dec 09 12:04:20 compute-0 ceph-mon[74388]: 2.b deep-scrub ok
Dec 09 12:04:20 compute-0 ceph-mon[74388]: pgmap v105: 165 pgs: 1 unknown, 164 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 2.0 KiB/s wr, 11 op/s
Dec 09 12:04:20 compute-0 ceph-mon[74388]: 4.17 scrub starts
Dec 09 12:04:20 compute-0 ceph-mon[74388]: 4.17 scrub ok
Dec 09 12:04:20 compute-0 ceph-mon[74388]: 5.9 scrub starts
Dec 09 12:04:20 compute-0 ceph-mon[74388]: 5.9 scrub ok
Dec 09 12:04:20 compute-0 ceph-mon[74388]: from='client.? 192.168.122.100:0/3077613506' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Dec 09 12:04:20 compute-0 ceph-mon[74388]: osdmap e39: 3 total, 3 up, 3 in
Dec 09 12:04:20 compute-0 ceph-mon[74388]: from='client.? 192.168.122.102:0/4123135884' entity='client.rgw.rgw.compute-2.mjhisb' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec 09 12:04:20 compute-0 ceph-mon[74388]: from='client.? 192.168.122.101:0/1132313064' entity='client.rgw.rgw.compute-1.mhnafh' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec 09 12:04:20 compute-0 ceph-mon[74388]: from='client.? ' entity='client.rgw.rgw.compute-2.mjhisb' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec 09 12:04:20 compute-0 ceph-mon[74388]: from='client.? ' entity='client.rgw.rgw.compute-1.mhnafh' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec 09 12:04:20 compute-0 ceph-mon[74388]: from='client.? 192.168.122.100:0/16791070' entity='client.rgw.rgw.compute-0.tyqqak' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec 09 12:04:20 compute-0 ceph-mon[74388]: 4.16 scrub starts
Dec 09 12:04:20 compute-0 ceph-mon[74388]: 4.16 scrub ok
Dec 09 12:04:20 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3077613506' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Dec 09 12:04:20 compute-0 ceph-mgr[74679]: mgr handle_mgr_map respawning because set of enabled modules changed!
Dec 09 12:04:20 compute-0 ceph-mgr[74679]: mgr respawn  e: '/usr/bin/ceph-mgr'
Dec 09 12:04:20 compute-0 ceph-mgr[74679]: mgr respawn  0: '/usr/bin/ceph-mgr'
Dec 09 12:04:20 compute-0 ceph-mgr[74679]: mgr respawn  1: '-n'
Dec 09 12:04:20 compute-0 ceph-mgr[74679]: mgr respawn  2: 'mgr.compute-0.wfxreg'
Dec 09 12:04:20 compute-0 ceph-mgr[74679]: mgr respawn  3: '-f'
Dec 09 12:04:20 compute-0 ceph-mgr[74679]: mgr respawn  4: '--setuser'
Dec 09 12:04:20 compute-0 ceph-mgr[74679]: mgr respawn  5: 'ceph'
Dec 09 12:04:20 compute-0 ceph-mgr[74679]: mgr respawn  6: '--setgroup'
Dec 09 12:04:20 compute-0 ceph-mgr[74679]: mgr respawn  7: 'ceph'
Dec 09 12:04:20 compute-0 ceph-mgr[74679]: mgr respawn  8: '--default-log-to-file=false'
Dec 09 12:04:20 compute-0 ceph-mgr[74679]: mgr respawn  9: '--default-log-to-journald=true'
Dec 09 12:04:20 compute-0 ceph-mgr[74679]: mgr respawn  10: '--default-log-to-stderr=false'
Dec 09 12:04:20 compute-0 ceph-mgr[74679]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Dec 09 12:04:20 compute-0 ceph-mgr[74679]: mgr respawn  exe_path /proc/self/exe
Dec 09 12:04:20 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : mgrmap e12: compute-0.wfxreg(active, since 2m), standbys: compute-2.hvlbot, compute-1.lorvly
Dec 09 12:04:20 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Dec 09 12:04:20 compute-0 systemd[1]: libpod-0e42f64c9a5cbec5316e8421012f88a714f88aff9f19dbf26dd41d75529a8778.scope: Deactivated successfully.
Dec 09 12:04:20 compute-0 podman[90223]: 2025-12-09 12:04:20.123285081 +0000 UTC m=+1.761446994 container died 0e42f64c9a5cbec5316e8421012f88a714f88aff9f19dbf26dd41d75529a8778 (image=quay.io/ceph/ceph:v19, name=agitated_bassi, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 09 12:04:20 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.mjhisb' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec 09 12:04:20 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.mhnafh' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec 09 12:04:20 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/16791070' entity='client.rgw.rgw.compute-0.tyqqak' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec 09 12:04:20 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Dec 09 12:04:20 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Dec 09 12:04:20 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Dec 09 12:04:20 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/16791070' entity='client.rgw.rgw.compute-0.tyqqak' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec 09 12:04:20 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Dec 09 12:04:20 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.mhnafh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec 09 12:04:20 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Dec 09 12:04:20 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.mjhisb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec 09 12:04:20 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 40 pg[11.0( empty local-lis/les=39/40 n=0 ec=39/39 lis/c=0/0 les/c/f=0/0/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:04:20 compute-0 sshd-session[75935]: Connection closed by 192.168.122.100 port 57478
Dec 09 12:04:20 compute-0 sshd-session[75993]: Connection closed by 192.168.122.100 port 57498
Dec 09 12:04:20 compute-0 sshd-session[76022]: Connection closed by 192.168.122.100 port 58228
Dec 09 12:04:20 compute-0 sshd-session[75790]: Connection closed by 192.168.122.100 port 57436
Dec 09 12:04:20 compute-0 sshd-session[75789]: Connection closed by 192.168.122.100 port 57428
Dec 09 12:04:20 compute-0 sshd-session[75848]: Connection closed by 192.168.122.100 port 57458
Dec 09 12:04:20 compute-0 sshd-session[75906]: Connection closed by 192.168.122.100 port 57474
Dec 09 12:04:20 compute-0 sshd-session[75964]: Connection closed by 192.168.122.100 port 57492
Dec 09 12:04:20 compute-0 sshd-session[75877]: Connection closed by 192.168.122.100 port 57470
Dec 09 12:04:20 compute-0 sshd-session[75819]: Connection closed by 192.168.122.100 port 57442
Dec 09 12:04:20 compute-0 sshd-session[76078]: Connection closed by 192.168.122.100 port 58256
Dec 09 12:04:20 compute-0 sshd-session[75990]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 09 12:04:20 compute-0 sshd-session[76049]: Connection closed by 192.168.122.100 port 58244
Dec 09 12:04:20 compute-0 sshd-session[76019]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 09 12:04:20 compute-0 sshd-session[75874]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 09 12:04:20 compute-0 systemd[1]: session-31.scope: Deactivated successfully.
Dec 09 12:04:20 compute-0 sshd-session[76075]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 09 12:04:20 compute-0 sshd-session[75961]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 09 12:04:20 compute-0 sshd-session[75784]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 09 12:04:20 compute-0 systemd-logind[799]: Session 31 logged out. Waiting for processes to exit.
Dec 09 12:04:20 compute-0 sshd-session[75932]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 09 12:04:20 compute-0 sshd-session[75767]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 09 12:04:20 compute-0 sshd-session[76046]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 09 12:04:20 compute-0 sshd-session[75816]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 09 12:04:20 compute-0 sshd-session[75903]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 09 12:04:20 compute-0 systemd-logind[799]: Removed session 31.
Dec 09 12:04:20 compute-0 sshd-session[75845]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 09 12:04:20 compute-0 systemd-logind[799]: Session 25 logged out. Waiting for processes to exit.
Dec 09 12:04:20 compute-0 systemd[1]: session-25.scope: Deactivated successfully.
Dec 09 12:04:20 compute-0 systemd[1]: session-28.scope: Deactivated successfully.
Dec 09 12:04:20 compute-0 systemd-logind[799]: Session 28 logged out. Waiting for processes to exit.
Dec 09 12:04:20 compute-0 systemd[1]: session-32.scope: Deactivated successfully.
Dec 09 12:04:20 compute-0 systemd[1]: session-21.scope: Deactivated successfully.
Dec 09 12:04:20 compute-0 systemd[1]: session-23.scope: Deactivated successfully.
Dec 09 12:04:20 compute-0 systemd-logind[799]: Session 23 logged out. Waiting for processes to exit.
Dec 09 12:04:20 compute-0 systemd[1]: session-26.scope: Deactivated successfully.
Dec 09 12:04:20 compute-0 systemd-logind[799]: Session 32 logged out. Waiting for processes to exit.
Dec 09 12:04:20 compute-0 systemd[1]: session-29.scope: Deactivated successfully.
Dec 09 12:04:20 compute-0 systemd-logind[799]: Session 21 logged out. Waiting for processes to exit.
Dec 09 12:04:20 compute-0 systemd-logind[799]: Session 26 logged out. Waiting for processes to exit.
Dec 09 12:04:20 compute-0 systemd-logind[799]: Session 33 logged out. Waiting for processes to exit.
Dec 09 12:04:20 compute-0 systemd[1]: session-24.scope: Deactivated successfully.
Dec 09 12:04:20 compute-0 systemd[1]: session-27.scope: Deactivated successfully.
Dec 09 12:04:20 compute-0 systemd-logind[799]: Session 29 logged out. Waiting for processes to exit.
Dec 09 12:04:20 compute-0 systemd[1]: session-30.scope: Deactivated successfully.
Dec 09 12:04:20 compute-0 systemd-logind[799]: Session 24 logged out. Waiting for processes to exit.
Dec 09 12:04:20 compute-0 systemd-logind[799]: Session 27 logged out. Waiting for processes to exit.
Dec 09 12:04:20 compute-0 systemd-logind[799]: Session 30 logged out. Waiting for processes to exit.
Dec 09 12:04:20 compute-0 systemd-logind[799]: Removed session 25.
Dec 09 12:04:20 compute-0 systemd-logind[799]: Removed session 28.
Dec 09 12:04:20 compute-0 systemd-logind[799]: Removed session 32.
Dec 09 12:04:20 compute-0 systemd-logind[799]: Removed session 21.
Dec 09 12:04:20 compute-0 systemd-logind[799]: Removed session 23.
Dec 09 12:04:20 compute-0 systemd-logind[799]: Removed session 26.
Dec 09 12:04:20 compute-0 systemd-logind[799]: Removed session 29.
Dec 09 12:04:20 compute-0 systemd-logind[799]: Removed session 24.
Dec 09 12:04:20 compute-0 systemd-logind[799]: Removed session 27.
Dec 09 12:04:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-87b19e405a88bc474c490b8c11f00458edbb18787a0eeb359c7a78534aa257b0-merged.mount: Deactivated successfully.
Dec 09 12:04:20 compute-0 systemd-logind[799]: Removed session 30.
Dec 09 12:04:20 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: ignoring --setuser ceph since I am not root
Dec 09 12:04:20 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: ignoring --setgroup ceph since I am not root
Dec 09 12:04:20 compute-0 ceph-mgr[74679]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Dec 09 12:04:20 compute-0 ceph-mgr[74679]: pidfile_write: ignore empty --pid-file
Dec 09 12:04:20 compute-0 podman[90223]: 2025-12-09 12:04:20.237199373 +0000 UTC m=+1.875361276 container remove 0e42f64c9a5cbec5316e8421012f88a714f88aff9f19dbf26dd41d75529a8778 (image=quay.io/ceph/ceph:v19, name=agitated_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec 09 12:04:20 compute-0 systemd[1]: libpod-conmon-0e42f64c9a5cbec5316e8421012f88a714f88aff9f19dbf26dd41d75529a8778.scope: Deactivated successfully.
Dec 09 12:04:20 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'alerts'
Dec 09 12:04:20 compute-0 sudo[90220]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:20 compute-0 podman[90156]: 2025-12-09 12:04:20.280221026 +0000 UTC m=+4.560117016 container create ba8cf08adc0b5aab11099e7401f5b90c14351705fa55a67a0b4843cf6ad0a802 (image=quay.io/ceph/haproxy:2.3, name=trusting_swartz)
Dec 09 12:04:20 compute-0 systemd[1]: Started libpod-conmon-ba8cf08adc0b5aab11099e7401f5b90c14351705fa55a67a0b4843cf6ad0a802.scope.
Dec 09 12:04:20 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:04:20 compute-0 podman[90156]: 2025-12-09 12:04:20.353318377 +0000 UTC m=+4.633214377 container init ba8cf08adc0b5aab11099e7401f5b90c14351705fa55a67a0b4843cf6ad0a802 (image=quay.io/ceph/haproxy:2.3, name=trusting_swartz)
Dec 09 12:04:20 compute-0 podman[90156]: 2025-12-09 12:04:20.359631664 +0000 UTC m=+4.639527654 container start ba8cf08adc0b5aab11099e7401f5b90c14351705fa55a67a0b4843cf6ad0a802 (image=quay.io/ceph/haproxy:2.3, name=trusting_swartz)
Dec 09 12:04:20 compute-0 podman[90156]: 2025-12-09 12:04:20.263783946 +0000 UTC m=+4.543679956 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Dec 09 12:04:20 compute-0 podman[90156]: 2025-12-09 12:04:20.363639075 +0000 UTC m=+4.643535095 container attach ba8cf08adc0b5aab11099e7401f5b90c14351705fa55a67a0b4843cf6ad0a802 (image=quay.io/ceph/haproxy:2.3, name=trusting_swartz)
Dec 09 12:04:20 compute-0 trusting_swartz[90392]: 0 0
Dec 09 12:04:20 compute-0 systemd[1]: libpod-ba8cf08adc0b5aab11099e7401f5b90c14351705fa55a67a0b4843cf6ad0a802.scope: Deactivated successfully.
Dec 09 12:04:20 compute-0 podman[90156]: 2025-12-09 12:04:20.365118754 +0000 UTC m=+4.645014744 container died ba8cf08adc0b5aab11099e7401f5b90c14351705fa55a67a0b4843cf6ad0a802 (image=quay.io/ceph/haproxy:2.3, name=trusting_swartz)
Dec 09 12:04:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-14b9361e21f7caa0d9ec29fb6d0139f53ef4dbbe864ce9060e482eaf3b70f86a-merged.mount: Deactivated successfully.
Dec 09 12:04:20 compute-0 podman[90156]: 2025-12-09 12:04:20.401735207 +0000 UTC m=+4.681631197 container remove ba8cf08adc0b5aab11099e7401f5b90c14351705fa55a67a0b4843cf6ad0a802 (image=quay.io/ceph/haproxy:2.3, name=trusting_swartz)
Dec 09 12:04:20 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:04:20.406+0000 7ff3b8721140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 09 12:04:20 compute-0 ceph-mgr[74679]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 09 12:04:20 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'balancer'
Dec 09 12:04:20 compute-0 systemd[1]: libpod-conmon-ba8cf08adc0b5aab11099e7401f5b90c14351705fa55a67a0b4843cf6ad0a802.scope: Deactivated successfully.
Dec 09 12:04:20 compute-0 systemd[1]: Reloading.
Dec 09 12:04:20 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:04:20.506+0000 7ff3b8721140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 09 12:04:20 compute-0 ceph-mgr[74679]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 09 12:04:20 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'cephadm'
Dec 09 12:04:20 compute-0 systemd-rc-local-generator[90437]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 12:04:20 compute-0 systemd-sysv-generator[90444]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 12:04:20 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Dec 09 12:04:20 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Dec 09 12:04:20 compute-0 sudo[90473]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsmshumjcnvxbapgwuytcybolgqxlakk ; /usr/bin/python3'
Dec 09 12:04:20 compute-0 sudo[90473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:04:20 compute-0 systemd[1]: Reloading.
Dec 09 12:04:20 compute-0 systemd-sysv-generator[90509]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 12:04:20 compute-0 systemd-rc-local-generator[90504]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 12:04:20 compute-0 python3[90477]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 750b57e3-924f-51a5-ab09-01517535f732 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-username admin _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 12:04:20 compute-0 podman[90516]: 2025-12-09 12:04:20.935917892 +0000 UTC m=+0.042189577 container create f4be2245c19bf404e2dbe609f6214464db34412491a80553838c8fcb14ebc294 (image=quay.io/ceph/ceph:v19, name=sharp_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec 09 12:04:21 compute-0 systemd[1]: Started libpod-conmon-f4be2245c19bf404e2dbe609f6214464db34412491a80553838c8fcb14ebc294.scope.
Dec 09 12:04:21 compute-0 podman[90516]: 2025-12-09 12:04:20.918975855 +0000 UTC m=+0.025247570 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:04:21 compute-0 systemd[1]: Starting Ceph haproxy.rgw.default.compute-0.rutkbd for 750b57e3-924f-51a5-ab09-01517535f732...
Dec 09 12:04:21 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:04:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7242837137d35fdb58db14e55af21d73ea51cba11eb43483803484fdb80db50d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7242837137d35fdb58db14e55af21d73ea51cba11eb43483803484fdb80db50d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7242837137d35fdb58db14e55af21d73ea51cba11eb43483803484fdb80db50d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:21 compute-0 ceph-mon[74388]: 4.1f deep-scrub starts
Dec 09 12:04:21 compute-0 ceph-mon[74388]: 4.1f deep-scrub ok
Dec 09 12:04:21 compute-0 ceph-mon[74388]: 3.c scrub starts
Dec 09 12:04:21 compute-0 ceph-mon[74388]: 3.c scrub ok
Dec 09 12:04:21 compute-0 ceph-mon[74388]: from='client.? 192.168.122.100:0/3077613506' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Dec 09 12:04:21 compute-0 ceph-mon[74388]: mgrmap e12: compute-0.wfxreg(active, since 2m), standbys: compute-2.hvlbot, compute-1.lorvly
Dec 09 12:04:21 compute-0 ceph-mon[74388]: from='client.? ' entity='client.rgw.rgw.compute-2.mjhisb' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec 09 12:04:21 compute-0 ceph-mon[74388]: from='client.? ' entity='client.rgw.rgw.compute-1.mhnafh' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec 09 12:04:21 compute-0 ceph-mon[74388]: from='client.? 192.168.122.100:0/16791070' entity='client.rgw.rgw.compute-0.tyqqak' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec 09 12:04:21 compute-0 ceph-mon[74388]: osdmap e40: 3 total, 3 up, 3 in
Dec 09 12:04:21 compute-0 ceph-mon[74388]: from='client.? 192.168.122.100:0/16791070' entity='client.rgw.rgw.compute-0.tyqqak' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec 09 12:04:21 compute-0 ceph-mon[74388]: from='client.? ' entity='client.rgw.rgw.compute-1.mhnafh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec 09 12:04:21 compute-0 ceph-mon[74388]: from='client.? 192.168.122.102:0/4123135884' entity='client.rgw.rgw.compute-2.mjhisb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec 09 12:04:21 compute-0 ceph-mon[74388]: from='client.? ' entity='client.rgw.rgw.compute-2.mjhisb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec 09 12:04:21 compute-0 ceph-mon[74388]: from='client.? 192.168.122.101:0/1132313064' entity='client.rgw.rgw.compute-1.mhnafh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec 09 12:04:21 compute-0 ceph-mon[74388]: 3.11 scrub starts
Dec 09 12:04:21 compute-0 ceph-mon[74388]: 3.11 scrub ok
Dec 09 12:04:21 compute-0 ceph-mon[74388]: 5.17 scrub starts
Dec 09 12:04:21 compute-0 ceph-mon[74388]: 5.17 scrub ok
Dec 09 12:04:21 compute-0 podman[90516]: 2025-12-09 12:04:21.054491346 +0000 UTC m=+0.160763051 container init f4be2245c19bf404e2dbe609f6214464db34412491a80553838c8fcb14ebc294 (image=quay.io/ceph/ceph:v19, name=sharp_blackburn, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 12:04:21 compute-0 podman[90516]: 2025-12-09 12:04:21.063845923 +0000 UTC m=+0.170117608 container start f4be2245c19bf404e2dbe609f6214464db34412491a80553838c8fcb14ebc294 (image=quay.io/ceph/ceph:v19, name=sharp_blackburn, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 12:04:21 compute-0 podman[90516]: 2025-12-09 12:04:21.067813233 +0000 UTC m=+0.174084918 container attach f4be2245c19bf404e2dbe609f6214464db34412491a80553838c8fcb14ebc294 (image=quay.io/ceph/ceph:v19, name=sharp_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 12:04:21 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Dec 09 12:04:21 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/16791070' entity='client.rgw.rgw.compute-0.tyqqak' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec 09 12:04:21 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.mhnafh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec 09 12:04:21 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.mjhisb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec 09 12:04:21 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Dec 09 12:04:21 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Dec 09 12:04:21 compute-0 podman[90615]: 2025-12-09 12:04:21.303074191 +0000 UTC m=+0.040415709 container create d845d38373399b27c5f961cd5a983c0c22677b6f0a8c8a9ec8bc84c5563a3da9 (image=quay.io/ceph/haproxy:2.3, name=ceph-750b57e3-924f-51a5-ab09-01517535f732-haproxy-rgw-default-compute-0-rutkbd)
Dec 09 12:04:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ba46bdfd5659bf555e6362a0044437effbbac6035e9c215be11fd77a294cf13/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:21 compute-0 podman[90615]: 2025-12-09 12:04:21.364028312 +0000 UTC m=+0.101369880 container init d845d38373399b27c5f961cd5a983c0c22677b6f0a8c8a9ec8bc84c5563a3da9 (image=quay.io/ceph/haproxy:2.3, name=ceph-750b57e3-924f-51a5-ab09-01517535f732-haproxy-rgw-default-compute-0-rutkbd)
Dec 09 12:04:21 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 12:04:21 compute-0 podman[90615]: 2025-12-09 12:04:21.374810226 +0000 UTC m=+0.112151764 container start d845d38373399b27c5f961cd5a983c0c22677b6f0a8c8a9ec8bc84c5563a3da9 (image=quay.io/ceph/haproxy:2.3, name=ceph-750b57e3-924f-51a5-ab09-01517535f732-haproxy-rgw-default-compute-0-rutkbd)
Dec 09 12:04:21 compute-0 podman[90615]: 2025-12-09 12:04:21.282630979 +0000 UTC m=+0.019972527 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Dec 09 12:04:21 compute-0 bash[90615]: d845d38373399b27c5f961cd5a983c0c22677b6f0a8c8a9ec8bc84c5563a3da9
Dec 09 12:04:21 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-haproxy-rgw-default-compute-0-rutkbd[90631]: [NOTICE] 342/120421 (2) : New worker #1 (4) forked
Dec 09 12:04:21 compute-0 systemd[1]: Started Ceph haproxy.rgw.default.compute-0.rutkbd for 750b57e3-924f-51a5-ab09-01517535f732.
Dec 09 12:04:21 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-haproxy-rgw-default-compute-0-rutkbd[90631]: [WARNING] 342/120421 (4) : Server backend/rgw.rgw.compute-0.tyqqak is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 09 12:04:21 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'crash'
Dec 09 12:04:21 compute-0 radosgw[89472]: v1 topic migration: starting v1 topic migration..
Dec 09 12:04:21 compute-0 radosgw[89472]: LDAP not started since no server URIs were provided in the configuration.
Dec 09 12:04:21 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-rgw-rgw-compute-0-tyqqak[89468]: 2025-12-09T12:04:21.395+0000 7fba4776a980 -1 LDAP not started since no server URIs were provided in the configuration.
Dec 09 12:04:21 compute-0 radosgw[89472]: v1 topic migration: finished v1 topic migration
Dec 09 12:04:21 compute-0 sudo[90084]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:21 compute-0 systemd[1]: session-33.scope: Deactivated successfully.
Dec 09 12:04:21 compute-0 systemd[1]: session-33.scope: Consumed 30.744s CPU time.
Dec 09 12:04:21 compute-0 systemd-logind[799]: Removed session 33.
Dec 09 12:04:21 compute-0 radosgw[89472]: framework: beast
Dec 09 12:04:21 compute-0 radosgw[89472]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Dec 09 12:04:21 compute-0 radosgw[89472]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Dec 09 12:04:21 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:04:21.490+0000 7ff3b8721140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 09 12:04:21 compute-0 ceph-mgr[74679]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 09 12:04:21 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'dashboard'
Dec 09 12:04:21 compute-0 radosgw[89472]: starting handler: beast
Dec 09 12:04:21 compute-0 radosgw[89472]: set uid:gid to 167:167 (ceph:ceph)
Dec 09 12:04:21 compute-0 radosgw[89472]: mgrc service_daemon_register rgw.14367 metadata {arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.tyqqak,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Dec 5 11:18:23 UTC 2025,kernel_version=5.14.0-648.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864296,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=6eaa3a05-5032-4605-9c9e-fb7e73f6c7ea,zone_name=default,zonegroup_id=3b83b7a9-efc4-4948-91a0-85f986eaabc5,zonegroup_name=default}
Dec 09 12:04:21 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Dec 09 12:04:21 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Dec 09 12:04:22 compute-0 ceph-mon[74388]: 4.a deep-scrub starts
Dec 09 12:04:22 compute-0 ceph-mon[74388]: 4.a deep-scrub ok
Dec 09 12:04:22 compute-0 ceph-mon[74388]: 3.1d scrub starts
Dec 09 12:04:22 compute-0 ceph-mon[74388]: from='client.? 192.168.122.100:0/16791070' entity='client.rgw.rgw.compute-0.tyqqak' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec 09 12:04:22 compute-0 ceph-mon[74388]: from='client.? ' entity='client.rgw.rgw.compute-1.mhnafh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec 09 12:04:22 compute-0 ceph-mon[74388]: from='client.? ' entity='client.rgw.rgw.compute-2.mjhisb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec 09 12:04:22 compute-0 ceph-mon[74388]: osdmap e41: 3 total, 3 up, 3 in
Dec 09 12:04:22 compute-0 ceph-mon[74388]: 3.1d scrub ok
Dec 09 12:04:22 compute-0 ceph-mon[74388]: 3.12 scrub starts
Dec 09 12:04:22 compute-0 ceph-mon[74388]: 3.12 scrub ok
Dec 09 12:04:22 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'devicehealth'
Dec 09 12:04:22 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:04:22.238+0000 7ff3b8721140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 09 12:04:22 compute-0 ceph-mgr[74679]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 09 12:04:22 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'diskprediction_local'
Dec 09 12:04:22 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 09 12:04:22 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 09 12:04:22 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]:   from numpy import show_config as show_numpy_config
Dec 09 12:04:22 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:04:22.451+0000 7ff3b8721140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 09 12:04:22 compute-0 ceph-mgr[74679]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 09 12:04:22 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'influx'
Dec 09 12:04:22 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:04:22.532+0000 7ff3b8721140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 09 12:04:22 compute-0 ceph-mgr[74679]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 09 12:04:22 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'insights'
Dec 09 12:04:22 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'iostat'
Dec 09 12:04:22 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:04:22.692+0000 7ff3b8721140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 09 12:04:22 compute-0 ceph-mgr[74679]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 09 12:04:22 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'k8sevents'
Dec 09 12:04:22 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Dec 09 12:04:22 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Dec 09 12:04:23 compute-0 ceph-mon[74388]: 3.d scrub starts
Dec 09 12:04:23 compute-0 ceph-mon[74388]: 3.d scrub ok
Dec 09 12:04:23 compute-0 ceph-mon[74388]: 3.9 scrub starts
Dec 09 12:04:23 compute-0 ceph-mon[74388]: 3.9 scrub ok
Dec 09 12:04:23 compute-0 ceph-mon[74388]: 5.14 scrub starts
Dec 09 12:04:23 compute-0 ceph-mon[74388]: 5.14 scrub ok
Dec 09 12:04:23 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'localpool'
Dec 09 12:04:23 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'mds_autoscaler'
Dec 09 12:04:23 compute-0 radosgw[89472]: ====== starting new request req=0x7fb91647e5d0 =====
Dec 09 12:04:23 compute-0 radosgw[89472]: ====== req done req=0x7fb91647e5d0 op status=0 http_status=200 latency=0.003000099s ======
Dec 09 12:04:23 compute-0 radosgw[89472]: beast: 0x7fb91647e5d0: 192.168.122.100 - anonymous [09/Dec/2025:12:04:23.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000099s
Dec 09 12:04:23 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'mirroring'
Dec 09 12:04:23 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'nfs'
Dec 09 12:04:23 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Dec 09 12:04:23 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Dec 09 12:04:23 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:04:23.831+0000 7ff3b8721140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 09 12:04:23 compute-0 ceph-mgr[74679]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 09 12:04:23 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'orchestrator'
Dec 09 12:04:24 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:04:24.073+0000 7ff3b8721140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 09 12:04:24 compute-0 ceph-mgr[74679]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 09 12:04:24 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'osd_perf_query'
Dec 09 12:04:24 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:04:24.162+0000 7ff3b8721140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 09 12:04:24 compute-0 ceph-mgr[74679]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 09 12:04:24 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'osd_support'
Dec 09 12:04:24 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:04:24.238+0000 7ff3b8721140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 09 12:04:24 compute-0 ceph-mgr[74679]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 09 12:04:24 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'pg_autoscaler'
Dec 09 12:04:24 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:04:24.336+0000 7ff3b8721140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 09 12:04:24 compute-0 ceph-mgr[74679]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 09 12:04:24 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'progress'
Dec 09 12:04:24 compute-0 ceph-mon[74388]: 5.2 scrub starts
Dec 09 12:04:24 compute-0 ceph-mon[74388]: 5.2 scrub ok
Dec 09 12:04:24 compute-0 ceph-mon[74388]: 5.4 scrub starts
Dec 09 12:04:24 compute-0 ceph-mon[74388]: 5.4 scrub ok
Dec 09 12:04:24 compute-0 ceph-mon[74388]: 4.12 scrub starts
Dec 09 12:04:24 compute-0 ceph-mon[74388]: 4.12 scrub ok
Dec 09 12:04:24 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:04:24.418+0000 7ff3b8721140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 09 12:04:24 compute-0 ceph-mgr[74679]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 09 12:04:24 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'prometheus'
Dec 09 12:04:24 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Dec 09 12:04:24 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Dec 09 12:04:24 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:04:24.802+0000 7ff3b8721140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 09 12:04:24 compute-0 ceph-mgr[74679]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 09 12:04:24 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'rbd_support'
Dec 09 12:04:24 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:04:24.917+0000 7ff3b8721140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 09 12:04:24 compute-0 ceph-mgr[74679]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 09 12:04:24 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'restful'
Dec 09 12:04:25 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'rgw'
Dec 09 12:04:25 compute-0 radosgw[89472]: ====== starting new request req=0x7fb91647e5d0 =====
Dec 09 12:04:25 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-haproxy-rgw-default-compute-0-rutkbd[90631]: [WARNING] 342/120425 (4) : Server backend/rgw.rgw.compute-0.tyqqak is UP, reason: Layer7 check passed, code: 200, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 09 12:04:25 compute-0 radosgw[89472]: ====== req done req=0x7fb91647e5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 09 12:04:25 compute-0 radosgw[89472]: beast: 0x7fb91647e5d0: 192.168.122.100 - anonymous [09/Dec/2025:12:04:25.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 09 12:04:25 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:04:25.400+0000 7ff3b8721140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 09 12:04:25 compute-0 ceph-mgr[74679]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 09 12:04:25 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'rook'
Dec 09 12:04:25 compute-0 ceph-mon[74388]: 4.5 scrub starts
Dec 09 12:04:25 compute-0 ceph-mon[74388]: 4.5 scrub ok
Dec 09 12:04:25 compute-0 ceph-mon[74388]: 5.1a scrub starts
Dec 09 12:04:25 compute-0 ceph-mon[74388]: 5.1a scrub ok
Dec 09 12:04:25 compute-0 ceph-mon[74388]: 4.11 scrub starts
Dec 09 12:04:25 compute-0 ceph-mon[74388]: 4.11 scrub ok
Dec 09 12:04:25 compute-0 ceph-mon[74388]: 3.a scrub starts
Dec 09 12:04:25 compute-0 ceph-mon[74388]: 3.a scrub ok
Dec 09 12:04:25 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Dec 09 12:04:25 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Dec 09 12:04:26 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:04:26.004+0000 7ff3b8721140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 09 12:04:26 compute-0 ceph-mgr[74679]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 09 12:04:26 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'selftest'
Dec 09 12:04:26 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:04:26.080+0000 7ff3b8721140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 09 12:04:26 compute-0 ceph-mgr[74679]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 09 12:04:26 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'snap_schedule'
Dec 09 12:04:26 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:04:26.170+0000 7ff3b8721140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 09 12:04:26 compute-0 ceph-mgr[74679]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 09 12:04:26 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'stats'
Dec 09 12:04:26 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'status'
Dec 09 12:04:26 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:04:26.320+0000 7ff3b8721140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec 09 12:04:26 compute-0 ceph-mgr[74679]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec 09 12:04:26 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'telegraf'
Dec 09 12:04:26 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 12:04:26 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:04:26.394+0000 7ff3b8721140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 09 12:04:26 compute-0 ceph-mgr[74679]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 09 12:04:26 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'telemetry'
Dec 09 12:04:26 compute-0 ceph-mon[74388]: 4.3 scrub starts
Dec 09 12:04:26 compute-0 ceph-mon[74388]: 4.3 scrub ok
Dec 09 12:04:26 compute-0 ceph-mon[74388]: 4.10 scrub starts
Dec 09 12:04:26 compute-0 ceph-mon[74388]: 4.10 scrub ok
Dec 09 12:04:26 compute-0 ceph-mon[74388]: 5.7 scrub starts
Dec 09 12:04:26 compute-0 ceph-mon[74388]: 5.7 scrub ok
Dec 09 12:04:26 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.hvlbot restarted
Dec 09 12:04:26 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.hvlbot started
Dec 09 12:04:26 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:04:26.559+0000 7ff3b8721140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 09 12:04:26 compute-0 ceph-mgr[74679]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 09 12:04:26 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'test_orchestrator'
Dec 09 12:04:26 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.lorvly restarted
Dec 09 12:04:26 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.lorvly started
Dec 09 12:04:26 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Dec 09 12:04:26 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Dec 09 12:04:26 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:04:26.788+0000 7ff3b8721140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 09 12:04:26 compute-0 ceph-mgr[74679]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 09 12:04:26 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'volumes'
Dec 09 12:04:27 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:04:27.147+0000 7ff3b8721140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'zabbix'
Dec 09 12:04:27 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:04:27.225+0000 7ff3b8721140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 09 12:04:27 compute-0 ceph-mon[74388]: log_channel(cluster) log [INF] : Active manager daemon compute-0.wfxreg restarted
Dec 09 12:04:27 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Dec 09 12:04:27 compute-0 ceph-mon[74388]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.wfxreg
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: ms_deliver_dispatch: unhandled message 0x55bcb4c4f860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Dec 09 12:04:27 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Dec 09 12:04:27 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: mgr handle_mgr_map Activating!
Dec 09 12:04:27 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : mgrmap e13: compute-0.wfxreg(active, starting, since 0.03079s), standbys: compute-1.lorvly, compute-2.hvlbot
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: mgr handle_mgr_map I am now activating
Dec 09 12:04:27 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec 09 12:04:27 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 09 12:04:27 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 09 12:04:27 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 09 12:04:27 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec 09 12:04:27 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 09 12:04:27 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.wfxreg", "id": "compute-0.wfxreg"} v 0)
Dec 09 12:04:27 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mgr metadata", "who": "compute-0.wfxreg", "id": "compute-0.wfxreg"}]: dispatch
Dec 09 12:04:27 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.lorvly", "id": "compute-1.lorvly"} v 0)
Dec 09 12:04:27 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mgr metadata", "who": "compute-1.lorvly", "id": "compute-1.lorvly"}]: dispatch
Dec 09 12:04:27 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.hvlbot", "id": "compute-2.hvlbot"} v 0)
Dec 09 12:04:27 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mgr metadata", "who": "compute-2.hvlbot", "id": "compute-2.hvlbot"}]: dispatch
Dec 09 12:04:27 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 09 12:04:27 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 09 12:04:27 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 09 12:04:27 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 09 12:04:27 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 09 12:04:27 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 09 12:04:27 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Dec 09 12:04:27 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec 09 12:04:27 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).mds e1 all = 1
Dec 09 12:04:27 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec 09 12:04:27 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 09 12:04:27 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Dec 09 12:04:27 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: mgr load Constructed class from module: balancer
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [balancer INFO root] Starting
Dec 09 12:04:27 compute-0 ceph-mon[74388]: log_channel(cluster) log [INF] : Manager daemon compute-0.wfxreg is now available
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [balancer INFO root] Optimize plan auto_2025-12-09_12:04:27
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: mgr load Constructed class from module: cephadm
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: mgr load Constructed class from module: crash
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: mgr load Constructed class from module: dashboard
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO access_control] Loading user roles DB version=2
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO sso] Loading SSO DB version=1
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO root] Configured CherryPy, starting engine...
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: mgr load Constructed class from module: devicehealth
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: mgr load Constructed class from module: iostat
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: mgr load Constructed class from module: nfs
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [devicehealth INFO root] Starting
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: mgr load Constructed class from module: orchestrator
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: mgr load Constructed class from module: pg_autoscaler
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: mgr load Constructed class from module: progress
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [progress INFO root] Loading...
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7ff33cacf6a0>, <progress.module.GhostEvent object at 0x7ff33cacf8b0>, <progress.module.GhostEvent object at 0x7ff33cacf8e0>, <progress.module.GhostEvent object at 0x7ff33cacf910>, <progress.module.GhostEvent object at 0x7ff33cacf940>, <progress.module.GhostEvent object at 0x7ff33cacf970>, <progress.module.GhostEvent object at 0x7ff33cacf9a0>, <progress.module.GhostEvent object at 0x7ff33cacf9d0>, <progress.module.GhostEvent object at 0x7ff33cacfa00>, <progress.module.GhostEvent object at 0x7ff33cacfa30>, <progress.module.GhostEvent object at 0x7ff33cacfa60>, <progress.module.GhostEvent object at 0x7ff33cacfa90>] historic events
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [progress INFO root] Loaded OSDMap, ready.
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [pg_autoscaler INFO root] _maybe_adjust
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [rbd_support INFO root] recovery thread starting
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [rbd_support INFO root] starting setup
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: mgr load Constructed class from module: rbd_support
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: mgr load Constructed class from module: restful
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: mgr load Constructed class from module: status
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [restful INFO root] server_addr: :: server_port: 8003
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: mgr load Constructed class from module: telemetry
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [restful WARNING root] server not running: no certificate configured
Dec 09 12:04:27 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wfxreg/mirror_snapshot_schedule"} v 0)
Dec 09 12:04:27 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wfxreg/mirror_snapshot_schedule"}]: dispatch
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [rbd_support INFO root] PerfHandler: starting
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [rbd_support INFO root] load_task_task: vms, start_after=
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [rbd_support INFO root] load_task_task: volumes, start_after=
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: mgr load Constructed class from module: volumes
Dec 09 12:04:27 compute-0 radosgw[89472]: ====== starting new request req=0x7fb91647e5d0 =====
Dec 09 12:04:27 compute-0 radosgw[89472]: ====== req done req=0x7fb91647e5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 09 12:04:27 compute-0 radosgw[89472]: beast: 0x7fb91647e5d0: 192.168.122.100 - anonymous [09/Dec/2025:12:04:27.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [rbd_support INFO root] load_task_task: backups, start_after=
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [rbd_support INFO root] load_task_task: images, start_after=
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [rbd_support INFO root] TaskHandler: starting
Dec 09 12:04:27 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wfxreg/trash_purge_schedule"} v 0)
Dec 09 12:04:27 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wfxreg/trash_purge_schedule"}]: dispatch
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [rbd_support INFO root] setup complete
Dec 09 12:04:27 compute-0 ceph-mon[74388]: 4.2 scrub starts
Dec 09 12:04:27 compute-0 ceph-mon[74388]: 4.2 scrub ok
Dec 09 12:04:27 compute-0 ceph-mon[74388]: Standby manager daemon compute-2.hvlbot restarted
Dec 09 12:04:27 compute-0 ceph-mon[74388]: Standby manager daemon compute-2.hvlbot started
Dec 09 12:04:27 compute-0 ceph-mon[74388]: Standby manager daemon compute-1.lorvly restarted
Dec 09 12:04:27 compute-0 ceph-mon[74388]: Standby manager daemon compute-1.lorvly started
Dec 09 12:04:27 compute-0 ceph-mon[74388]: 3.17 scrub starts
Dec 09 12:04:27 compute-0 ceph-mon[74388]: 3.17 scrub ok
Dec 09 12:04:27 compute-0 ceph-mon[74388]: 4.d scrub starts
Dec 09 12:04:27 compute-0 ceph-mon[74388]: 4.d scrub ok
Dec 09 12:04:27 compute-0 ceph-mon[74388]: Active manager daemon compute-0.wfxreg restarted
Dec 09 12:04:27 compute-0 ceph-mon[74388]: Activating manager daemon compute-0.wfxreg
Dec 09 12:04:27 compute-0 ceph-mon[74388]: osdmap e42: 3 total, 3 up, 3 in
Dec 09 12:04:27 compute-0 ceph-mon[74388]: mgrmap e13: compute-0.wfxreg(active, starting, since 0.03079s), standbys: compute-1.lorvly, compute-2.hvlbot
Dec 09 12:04:27 compute-0 ceph-mon[74388]: from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 09 12:04:27 compute-0 ceph-mon[74388]: from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 09 12:04:27 compute-0 ceph-mon[74388]: from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 09 12:04:27 compute-0 ceph-mon[74388]: from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mgr metadata", "who": "compute-0.wfxreg", "id": "compute-0.wfxreg"}]: dispatch
Dec 09 12:04:27 compute-0 ceph-mon[74388]: from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mgr metadata", "who": "compute-1.lorvly", "id": "compute-1.lorvly"}]: dispatch
Dec 09 12:04:27 compute-0 ceph-mon[74388]: from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mgr metadata", "who": "compute-2.hvlbot", "id": "compute-2.hvlbot"}]: dispatch
Dec 09 12:04:27 compute-0 ceph-mon[74388]: from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 09 12:04:27 compute-0 ceph-mon[74388]: from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 09 12:04:27 compute-0 ceph-mon[74388]: from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 09 12:04:27 compute-0 ceph-mon[74388]: from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec 09 12:04:27 compute-0 ceph-mon[74388]: from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 09 12:04:27 compute-0 ceph-mon[74388]: from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec 09 12:04:27 compute-0 ceph-mon[74388]: Manager daemon compute-0.wfxreg is now available
Dec 09 12:04:27 compute-0 ceph-mon[74388]: from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wfxreg/mirror_snapshot_schedule"}]: dispatch
Dec 09 12:04:27 compute-0 ceph-mon[74388]: from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wfxreg/trash_purge_schedule"}]: dispatch
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Dec 09 12:04:27 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Dec 09 12:04:27 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Dec 09 12:04:27 compute-0 sshd-session[90796]: Accepted publickey for ceph-admin from 192.168.122.100 port 55744 ssh2: RSA SHA256:9gI9N7BVF766ydxek6duxvVO5SKV8ll995eSm4AS2/E
Dec 09 12:04:27 compute-0 systemd-logind[799]: New session 34 of user ceph-admin.
Dec 09 12:04:27 compute-0 systemd[1]: Started Session 34 of User ceph-admin.
Dec 09 12:04:27 compute-0 sshd-session[90796]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 09 12:04:27 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.module] Engine started.
Dec 09 12:04:27 compute-0 sudo[90812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 12:04:27 compute-0 sudo[90812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:27 compute-0 sudo[90812]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:27 compute-0 sudo[90837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Dec 09 12:04:27 compute-0 sudo[90837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:28 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : mgrmap e14: compute-0.wfxreg(active, since 1.05165s), standbys: compute-1.lorvly, compute-2.hvlbot
Dec 09 12:04:28 compute-0 ceph-mgr[74679]: log_channel(audit) log [DBG] : from='client.14385 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-username", "value": "admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 12:04:28 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_USERNAME}] v 0)
Dec 09 12:04:28 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v3: 166 pgs: 166 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec 09 12:04:28 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:28 compute-0 sharp_blackburn[90544]: Option GRAFANA_API_USERNAME updated
Dec 09 12:04:28 compute-0 systemd[1]: libpod-f4be2245c19bf404e2dbe609f6214464db34412491a80553838c8fcb14ebc294.scope: Deactivated successfully.
Dec 09 12:04:28 compute-0 podman[90516]: 2025-12-09 12:04:28.347735685 +0000 UTC m=+7.454007380 container died f4be2245c19bf404e2dbe609f6214464db34412491a80553838c8fcb14ebc294 (image=quay.io/ceph/ceph:v19, name=sharp_blackburn, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 09 12:04:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-7242837137d35fdb58db14e55af21d73ea51cba11eb43483803484fdb80db50d-merged.mount: Deactivated successfully.
Dec 09 12:04:28 compute-0 podman[90516]: 2025-12-09 12:04:28.390739968 +0000 UTC m=+7.497011663 container remove f4be2245c19bf404e2dbe609f6214464db34412491a80553838c8fcb14ebc294 (image=quay.io/ceph/ceph:v19, name=sharp_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 09 12:04:28 compute-0 systemd[1]: libpod-conmon-f4be2245c19bf404e2dbe609f6214464db34412491a80553838c8fcb14ebc294.scope: Deactivated successfully.
Dec 09 12:04:28 compute-0 sudo[90473]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:28 compute-0 ceph-mon[74388]: 2.1b scrub starts
Dec 09 12:04:28 compute-0 ceph-mon[74388]: 2.1b scrub ok
Dec 09 12:04:28 compute-0 ceph-mon[74388]: 5.1e scrub starts
Dec 09 12:04:28 compute-0 ceph-mon[74388]: 5.1e scrub ok
Dec 09 12:04:28 compute-0 ceph-mon[74388]: 3.5 scrub starts
Dec 09 12:04:28 compute-0 ceph-mon[74388]: 3.5 scrub ok
Dec 09 12:04:28 compute-0 ceph-mon[74388]: mgrmap e14: compute-0.wfxreg(active, since 1.05165s), standbys: compute-1.lorvly, compute-2.hvlbot
Dec 09 12:04:28 compute-0 ceph-mon[74388]: from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:28 compute-0 podman[90942]: 2025-12-09 12:04:28.526780636 +0000 UTC m=+0.049340462 container exec a4b836a90c212a6dcd631d0879d1d67c676cdc16d15f42acc55a122ac896ef53 (image=quay.io/ceph/ceph:v19, name=ceph-750b57e3-924f-51a5-ab09-01517535f732-mon-compute-0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec 09 12:04:28 compute-0 sudo[90985]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-puzpwsgfjsczsgqrotbxfekkgjzlxvvu ; /usr/bin/python3'
Dec 09 12:04:28 compute-0 sudo[90985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:04:28 compute-0 podman[90942]: 2025-12-09 12:04:28.646581781 +0000 UTC m=+0.169141607 container exec_died a4b836a90c212a6dcd631d0879d1d67c676cdc16d15f42acc55a122ac896ef53 (image=quay.io/ceph/ceph:v19, name=ceph-750b57e3-924f-51a5-ab09-01517535f732-mon-compute-0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 09 12:04:28 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Dec 09 12:04:28 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Dec 09 12:04:28 compute-0 python3[90988]: ansible-ansible.legacy.command Invoked with stdin=/home/grafana_password.yml stdin_add_newline=False _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 750b57e3-924f-51a5-ab09-01517535f732 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-password -i - _uses_shell=False strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None
Dec 09 12:04:28 compute-0 podman[91024]: 2025-12-09 12:04:28.861842851 +0000 UTC m=+0.051831004 container create 29a70d2ffefc11f2e27410187de3ea37793ba54a8fdf4f549e50d49a989492dc (image=quay.io/ceph/ceph:v19, name=jovial_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Dec 09 12:04:28 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 09 12:04:28 compute-0 systemd[1]: Started libpod-conmon-29a70d2ffefc11f2e27410187de3ea37793ba54a8fdf4f549e50d49a989492dc.scope.
Dec 09 12:04:28 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:28 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 09 12:04:28 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:28 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:04:28 compute-0 podman[91024]: 2025-12-09 12:04:28.834026737 +0000 UTC m=+0.024014920 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:04:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/748927d2ce945a7da23c5eebcf446115057af72d986cd42b60e64e959c95dad0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/748927d2ce945a7da23c5eebcf446115057af72d986cd42b60e64e959c95dad0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/748927d2ce945a7da23c5eebcf446115057af72d986cd42b60e64e959c95dad0/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:28 compute-0 podman[91024]: 2025-12-09 12:04:28.94465504 +0000 UTC m=+0.134643213 container init 29a70d2ffefc11f2e27410187de3ea37793ba54a8fdf4f549e50d49a989492dc (image=quay.io/ceph/ceph:v19, name=jovial_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec 09 12:04:28 compute-0 podman[91024]: 2025-12-09 12:04:28.952790268 +0000 UTC m=+0.142778421 container start 29a70d2ffefc11f2e27410187de3ea37793ba54a8fdf4f549e50d49a989492dc (image=quay.io/ceph/ceph:v19, name=jovial_bouman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 09 12:04:28 compute-0 podman[91024]: 2025-12-09 12:04:28.956056955 +0000 UTC m=+0.146045128 container attach 29a70d2ffefc11f2e27410187de3ea37793ba54a8fdf4f549e50d49a989492dc (image=quay.io/ceph/ceph:v19, name=jovial_bouman, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec 09 12:04:29 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 09 12:04:29 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:29 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 09 12:04:29 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:29 compute-0 podman[91142]: 2025-12-09 12:04:29.232266548 +0000 UTC m=+0.047259645 container exec d845d38373399b27c5f961cd5a983c0c22677b6f0a8c8a9ec8bc84c5563a3da9 (image=quay.io/ceph/haproxy:2.3, name=ceph-750b57e3-924f-51a5-ab09-01517535f732-haproxy-rgw-default-compute-0-rutkbd)
Dec 09 12:04:29 compute-0 ceph-mgr[74679]: [cephadm INFO cherrypy.error] [09/Dec/2025:12:04:29] ENGINE Bus STARTING
Dec 09 12:04:29 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : [09/Dec/2025:12:04:29] ENGINE Bus STARTING
Dec 09 12:04:29 compute-0 podman[91142]: 2025-12-09 12:04:29.239132012 +0000 UTC m=+0.054125109 container exec_died d845d38373399b27c5f961cd5a983c0c22677b6f0a8c8a9ec8bc84c5563a3da9 (image=quay.io/ceph/haproxy:2.3, name=ceph-750b57e3-924f-51a5-ab09-01517535f732-haproxy-rgw-default-compute-0-rutkbd)
Dec 09 12:04:29 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v4: 166 pgs: 166 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec 09 12:04:29 compute-0 sudo[90837]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:29 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 12:04:29 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:29 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 12:04:29 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:29 compute-0 ceph-mgr[74679]: log_channel(audit) log [DBG] : from='client.14409 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-password", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 12:04:29 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_PASSWORD}] v 0)
Dec 09 12:04:29 compute-0 ceph-mgr[74679]: [cephadm INFO cherrypy.error] [09/Dec/2025:12:04:29] ENGINE Serving on http://192.168.122.100:8765
Dec 09 12:04:29 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : [09/Dec/2025:12:04:29] ENGINE Serving on http://192.168.122.100:8765
Dec 09 12:04:29 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:29 compute-0 jovial_bouman[91063]: Option GRAFANA_API_PASSWORD updated
Dec 09 12:04:29 compute-0 systemd[1]: libpod-29a70d2ffefc11f2e27410187de3ea37793ba54a8fdf4f549e50d49a989492dc.scope: Deactivated successfully.
Dec 09 12:04:29 compute-0 podman[91024]: 2025-12-09 12:04:29.372753181 +0000 UTC m=+0.562741344 container died 29a70d2ffefc11f2e27410187de3ea37793ba54a8fdf4f549e50d49a989492dc (image=quay.io/ceph/ceph:v19, name=jovial_bouman, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Dec 09 12:04:29 compute-0 sudo[91184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 12:04:29 compute-0 sudo[91184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:29 compute-0 sudo[91184]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-748927d2ce945a7da23c5eebcf446115057af72d986cd42b60e64e959c95dad0-merged.mount: Deactivated successfully.
Dec 09 12:04:29 compute-0 radosgw[89472]: ====== starting new request req=0x7fb91647e5d0 =====
Dec 09 12:04:29 compute-0 radosgw[89472]: ====== req done req=0x7fb91647e5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 09 12:04:29 compute-0 radosgw[89472]: beast: 0x7fb91647e5d0: 192.168.122.100 - anonymous [09/Dec/2025:12:04:29.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 09 12:04:29 compute-0 ceph-mgr[74679]: [devicehealth INFO root] Check health
Dec 09 12:04:29 compute-0 podman[91024]: 2025-12-09 12:04:29.417804511 +0000 UTC m=+0.607792664 container remove 29a70d2ffefc11f2e27410187de3ea37793ba54a8fdf4f549e50d49a989492dc (image=quay.io/ceph/ceph:v19, name=jovial_bouman, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 12:04:29 compute-0 systemd[1]: libpod-conmon-29a70d2ffefc11f2e27410187de3ea37793ba54a8fdf4f549e50d49a989492dc.scope: Deactivated successfully.
Dec 09 12:04:29 compute-0 sudo[90985]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:29 compute-0 sudo[91243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 09 12:04:29 compute-0 sudo[91243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:29 compute-0 ceph-mgr[74679]: [cephadm INFO cherrypy.error] [09/Dec/2025:12:04:29] ENGINE Serving on https://192.168.122.100:7150
Dec 09 12:04:29 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : [09/Dec/2025:12:04:29] ENGINE Serving on https://192.168.122.100:7150
Dec 09 12:04:29 compute-0 ceph-mgr[74679]: [cephadm INFO cherrypy.error] [09/Dec/2025:12:04:29] ENGINE Bus STARTED
Dec 09 12:04:29 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : [09/Dec/2025:12:04:29] ENGINE Bus STARTED
Dec 09 12:04:29 compute-0 ceph-mgr[74679]: [cephadm INFO cherrypy.error] [09/Dec/2025:12:04:29] ENGINE Client ('192.168.122.100', 43790) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 09 12:04:29 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : [09/Dec/2025:12:04:29] ENGINE Client ('192.168.122.100', 43790) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 09 12:04:29 compute-0 sudo[91294]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqckrbftlmizhmsmswemhxczntasvpcr ; /usr/bin/python3'
Dec 09 12:04:29 compute-0 sudo[91294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:04:29 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Dec 09 12:04:29 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Dec 09 12:04:29 compute-0 python3[91302]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 750b57e3-924f-51a5-ab09-01517535f732 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-alertmanager-api-host http://192.168.122.100:9093 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
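# Editor's note: the ansible task above can be replayed by hand with the
# containerized ceph CLI, exactly as logged (fsid and paths copied from the
# invocation). A minimal sketch; the assimilate_ceph.conf and ceph_spec.yaml
# volume mounts are dropped here on the assumption that this subcommand does
# not read them:
#   podman run --rm --net=host --ipc=host --interactive \
#     --volume /etc/ceph:/etc/ceph:z \
#     --entrypoint ceph quay.io/ceph/ceph:v19 \
#     --fsid 750b57e3-924f-51a5-ab09-01517535f732 \
#     -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
#     dashboard set-alertmanager-api-host http://192.168.122.100:9093
# On success the container prints "Option ALERTMANAGER_API_HOST updated", as
# seen in the container output below.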
Dec 09 12:04:29 compute-0 podman[91312]: 2025-12-09 12:04:29.839985847 +0000 UTC m=+0.027480244 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:04:29 compute-0 sudo[91243]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:29 compute-0 podman[91312]: 2025-12-09 12:04:29.959284905 +0000 UTC m=+0.146779282 container create e00e99dc5ea407eb76bc74094686ff8aa29736790964c254dda902aad136ba97 (image=quay.io/ceph/ceph:v19, name=determined_driscoll, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Dec 09 12:04:29 compute-0 ceph-mon[74388]: 4.6 scrub starts
Dec 09 12:04:29 compute-0 ceph-mon[74388]: 4.6 scrub ok
Dec 09 12:04:29 compute-0 ceph-mon[74388]: 4.1e scrub starts
Dec 09 12:04:29 compute-0 ceph-mon[74388]: 4.1e scrub ok
Dec 09 12:04:29 compute-0 ceph-mon[74388]: from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:29 compute-0 ceph-mon[74388]: from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:29 compute-0 ceph-mon[74388]: 3.3 scrub starts
Dec 09 12:04:29 compute-0 ceph-mon[74388]: from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:29 compute-0 ceph-mon[74388]: 3.3 scrub ok
Dec 09 12:04:29 compute-0 ceph-mon[74388]: from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:29 compute-0 ceph-mon[74388]: [09/Dec/2025:12:04:29] ENGINE Bus STARTING
Dec 09 12:04:29 compute-0 ceph-mon[74388]: from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:29 compute-0 ceph-mon[74388]: from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:29 compute-0 ceph-mon[74388]: from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:29 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 09 12:04:29 compute-0 systemd[1]: Started libpod-conmon-e00e99dc5ea407eb76bc74094686ff8aa29736790964c254dda902aad136ba97.scope.
Dec 09 12:04:30 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:30 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 09 12:04:30 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:30 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:04:30 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Dec 09 12:04:30 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 09 12:04:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9275913e3c10e5f2378f8becee9d6f70f8cd9959d097884f307a8c3a7a020edc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9275913e3c10e5f2378f8becee9d6f70f8cd9959d097884f307a8c3a7a020edc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9275913e3c10e5f2378f8becee9d6f70f8cd9959d097884f307a8c3a7a020edc/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:30 compute-0 sudo[91341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 12:04:30 compute-0 sudo[91341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:30 compute-0 sudo[91341]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:30 compute-0 podman[91312]: 2025-12-09 12:04:30.038542039 +0000 UTC m=+0.226036436 container init e00e99dc5ea407eb76bc74094686ff8aa29736790964c254dda902aad136ba97 (image=quay.io/ceph/ceph:v19, name=determined_driscoll, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 09 12:04:30 compute-0 podman[91312]: 2025-12-09 12:04:30.044279836 +0000 UTC m=+0.231774213 container start e00e99dc5ea407eb76bc74094686ff8aa29736790964c254dda902aad136ba97 (image=quay.io/ceph/ceph:v19, name=determined_driscoll, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Dec 09 12:04:30 compute-0 podman[91312]: 2025-12-09 12:04:30.048775745 +0000 UTC m=+0.236270132 container attach e00e99dc5ea407eb76bc74094686ff8aa29736790964c254dda902aad136ba97 (image=quay.io/ceph/ceph:v19, name=determined_driscoll, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec 09 12:04:30 compute-0 sudo[91369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Dec 09 12:04:30 compute-0 sudo[91369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:30 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : mgrmap e15: compute-0.wfxreg(active, since 2s), standbys: compute-1.lorvly, compute-2.hvlbot
Dec 09 12:04:30 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 09 12:04:30 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:30 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 09 12:04:30 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:30 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Dec 09 12:04:30 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 09 12:04:30 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Dec 09 12:04:30 compute-0 sudo[91369]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:30 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 12:04:30 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:30 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 12:04:30 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:30 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec 09 12:04:30 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 09 12:04:30 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 12:04:30 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:04:30 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 09 12:04:30 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 09 12:04:30 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Dec 09 12:04:30 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Dec 09 12:04:30 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Dec 09 12:04:30 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Dec 09 12:04:30 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Dec 09 12:04:30 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Dec 09 12:04:30 compute-0 ceph-mgr[74679]: log_channel(audit) log [DBG] : from='client.14421 -' entity='client.admin' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.122.100:9093", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 12:04:30 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ALERTMANAGER_API_HOST}] v 0)
Dec 09 12:04:30 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:30 compute-0 determined_driscoll[91353]: Option ALERTMANAGER_API_HOST updated
Dec 09 12:04:30 compute-0 sudo[91433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 09 12:04:30 compute-0 sudo[91433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:30 compute-0 sudo[91433]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:30 compute-0 systemd[1]: libpod-e00e99dc5ea407eb76bc74094686ff8aa29736790964c254dda902aad136ba97.scope: Deactivated successfully.
Dec 09 12:04:30 compute-0 podman[91312]: 2025-12-09 12:04:30.444708848 +0000 UTC m=+0.632203235 container died e00e99dc5ea407eb76bc74094686ff8aa29736790964c254dda902aad136ba97 (image=quay.io/ceph/ceph:v19, name=determined_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 09 12:04:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-9275913e3c10e5f2378f8becee9d6f70f8cd9959d097884f307a8c3a7a020edc-merged.mount: Deactivated successfully.
Dec 09 12:04:30 compute-0 podman[91312]: 2025-12-09 12:04:30.48586376 +0000 UTC m=+0.673358137 container remove e00e99dc5ea407eb76bc74094686ff8aa29736790964c254dda902aad136ba97 (image=quay.io/ceph/ceph:v19, name=determined_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 12:04:30 compute-0 systemd[1]: libpod-conmon-e00e99dc5ea407eb76bc74094686ff8aa29736790964c254dda902aad136ba97.scope: Deactivated successfully.
Dec 09 12:04:30 compute-0 sudo[91461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/etc/ceph
Dec 09 12:04:30 compute-0 sudo[91461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:30 compute-0 sudo[91461]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:30 compute-0 sudo[91294]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:30 compute-0 sudo[91496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/etc/ceph/ceph.conf.new
Dec 09 12:04:30 compute-0 sudo[91496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:30 compute-0 sudo[91496]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:30 compute-0 sudo[91521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732
Dec 09 12:04:30 compute-0 sudo[91521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:30 compute-0 sudo[91521]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:30 compute-0 sudo[91582]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpvdgzxhqczppgduijnqhrrcavsjyznf ; /usr/bin/python3'
Dec 09 12:04:30 compute-0 sudo[91582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:04:30 compute-0 sudo[91559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/etc/ceph/ceph.conf.new
Dec 09 12:04:30 compute-0 sudo[91559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:30 compute-0 sudo[91559]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:30 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Dec 09 12:04:30 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Dec 09 12:04:30 compute-0 python3[91594]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 750b57e3-924f-51a5-ab09-01517535f732 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-prometheus-api-host http://192.168.122.100:9092 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 12:04:30 compute-0 sudo[91620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/etc/ceph/ceph.conf.new
Dec 09 12:04:30 compute-0 sudo[91620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:30 compute-0 sudo[91620]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:30 compute-0 podman[91643]: 2025-12-09 12:04:30.834854012 +0000 UTC m=+0.044666178 container create 7d629c2643ef6df4744025c69fd51e602acb2cc5454c76dab8d9f0f0efa60841 (image=quay.io/ceph/ceph:v19, name=loving_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 12:04:30 compute-0 sudo[91651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/etc/ceph/ceph.conf.new
Dec 09 12:04:30 compute-0 sudo[91651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:30 compute-0 sudo[91651]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:30 compute-0 systemd[1]: Started libpod-conmon-7d629c2643ef6df4744025c69fd51e602acb2cc5454c76dab8d9f0f0efa60841.scope.
Dec 09 12:04:30 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:04:30 compute-0 sudo[91685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 09 12:04:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8d36c62ab091f8abec6f598b336c10b87e6d72cf0bd794e043d18ddee1f9f67/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8d36c62ab091f8abec6f598b336c10b87e6d72cf0bd794e043d18ddee1f9f67/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8d36c62ab091f8abec6f598b336c10b87e6d72cf0bd794e043d18ddee1f9f67/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:30 compute-0 sudo[91685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:30 compute-0 podman[91643]: 2025-12-09 12:04:30.815710234 +0000 UTC m=+0.025522410 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:04:30 compute-0 sudo[91685]: pam_unix(sudo:session): session closed for user root
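# Editor's note: the sudo sequence ending above is cephadm's staged config
# write. A condensed sketch of the same pattern, with fsid and paths copied
# from the log (the payload write itself happens between the chown calls and
# does not appear in the journal):
#   FSID=750b57e3-924f-51a5-ab09-01517535f732
#   STAGE=/tmp/cephadm-$FSID/etc/ceph
#   sudo mkdir -p /etc/ceph "$STAGE"
#   sudo touch "$STAGE/ceph.conf.new"               # create the staging file
#   sudo chown -R ceph-admin /tmp/cephadm-$FSID     # let the ssh user write the contents
#   sudo chmod 644 "$STAGE/ceph.conf.new"
#   sudo chown -R 0:0 "$STAGE/ceph.conf.new"        # hand ownership back to root
#   sudo chmod 644 "$STAGE/ceph.conf.new"
#   sudo mv "$STAGE/ceph.conf.new" /etc/ceph/ceph.conf
# Note the final mv is only an atomic rename() when /tmp and /etc/ceph are on
# the same filesystem; across filesystems it degrades to copy-and-unlink.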
Dec 09 12:04:30 compute-0 podman[91643]: 2025-12-09 12:04:30.921139696 +0000 UTC m=+0.130951862 container init 7d629c2643ef6df4744025c69fd51e602acb2cc5454c76dab8d9f0f0efa60841 (image=quay.io/ceph/ceph:v19, name=loving_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 12:04:30 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.conf
Dec 09 12:04:30 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.conf
Dec 09 12:04:30 compute-0 podman[91643]: 2025-12-09 12:04:30.927810136 +0000 UTC m=+0.137622292 container start 7d629c2643ef6df4744025c69fd51e602acb2cc5454c76dab8d9f0f0efa60841 (image=quay.io/ceph/ceph:v19, name=loving_davinci, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec 09 12:04:30 compute-0 podman[91643]: 2025-12-09 12:04:30.931856678 +0000 UTC m=+0.141668864 container attach 7d629c2643ef6df4744025c69fd51e602acb2cc5454c76dab8d9f0f0efa60841 (image=quay.io/ceph/ceph:v19, name=loving_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 09 12:04:30 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.conf
Dec 09 12:04:30 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.conf
Dec 09 12:04:30 compute-0 sudo[91714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config
Dec 09 12:04:30 compute-0 ceph-mon[74388]: 4.1d deep-scrub starts
Dec 09 12:04:30 compute-0 ceph-mon[74388]: 4.1d deep-scrub ok
Dec 09 12:04:30 compute-0 ceph-mon[74388]: pgmap v4: 166 pgs: 166 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec 09 12:04:30 compute-0 ceph-mon[74388]: from='client.14409 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-password", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 12:04:30 compute-0 ceph-mon[74388]: [09/Dec/2025:12:04:29] ENGINE Serving on http://192.168.122.100:8765
Dec 09 12:04:30 compute-0 ceph-mon[74388]: [09/Dec/2025:12:04:29] ENGINE Serving on https://192.168.122.100:7150
Dec 09 12:04:30 compute-0 ceph-mon[74388]: [09/Dec/2025:12:04:29] ENGINE Bus STARTED
Dec 09 12:04:30 compute-0 ceph-mon[74388]: [09/Dec/2025:12:04:29] ENGINE Client ('192.168.122.100', 43790) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 09 12:04:30 compute-0 ceph-mon[74388]: 2.19 scrub starts
Dec 09 12:04:30 compute-0 ceph-mon[74388]: 2.19 scrub ok
Dec 09 12:04:30 compute-0 ceph-mon[74388]: from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:30 compute-0 ceph-mon[74388]: from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:30 compute-0 ceph-mon[74388]: from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 09 12:04:30 compute-0 ceph-mon[74388]: 5.1 scrub starts
Dec 09 12:04:30 compute-0 ceph-mon[74388]: mgrmap e15: compute-0.wfxreg(active, since 2s), standbys: compute-1.lorvly, compute-2.hvlbot
Dec 09 12:04:30 compute-0 ceph-mon[74388]: 5.1 scrub ok
Dec 09 12:04:30 compute-0 ceph-mon[74388]: from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:30 compute-0 ceph-mon[74388]: from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:30 compute-0 ceph-mon[74388]: from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 09 12:04:30 compute-0 ceph-mon[74388]: from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Dec 09 12:04:30 compute-0 ceph-mon[74388]: from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:30 compute-0 ceph-mon[74388]: from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:30 compute-0 ceph-mon[74388]: from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 09 12:04:30 compute-0 ceph-mon[74388]: from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:04:30 compute-0 ceph-mon[74388]: from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 09 12:04:30 compute-0 ceph-mon[74388]: from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:30 compute-0 ceph-mon[74388]: 2.1 scrub starts
Dec 09 12:04:30 compute-0 ceph-mon[74388]: 2.1 scrub ok
Dec 09 12:04:30 compute-0 sudo[91714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:30 compute-0 sudo[91714]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:30 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.conf
Dec 09 12:04:30 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.conf
Dec 09 12:04:31 compute-0 sudo[91739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config
Dec 09 12:04:31 compute-0 sudo[91739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:31 compute-0 sudo[91739]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:31 compute-0 sudo[91783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.conf.new
Dec 09 12:04:31 compute-0 sudo[91783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:31 compute-0 sudo[91783]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:31 compute-0 sudo[91808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732
Dec 09 12:04:31 compute-0 sudo[91808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:31 compute-0 sudo[91808]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:31 compute-0 sudo[91833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.conf.new
Dec 09 12:04:31 compute-0 sudo[91833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:31 compute-0 sudo[91833]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:31 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v5: 166 pgs: 166 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec 09 12:04:31 compute-0 ceph-mgr[74679]: log_channel(audit) log [DBG] : from='client.14427 -' entity='client.admin' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.122.100:9092", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 12:04:31 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/PROMETHEUS_API_HOST}] v 0)
Dec 09 12:04:31 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:31 compute-0 loving_davinci[91698]: Option PROMETHEUS_API_HOST updated
Dec 09 12:04:31 compute-0 sudo[91881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.conf.new
Dec 09 12:04:31 compute-0 sudo[91881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:31 compute-0 sudo[91881]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:31 compute-0 systemd[1]: libpod-7d629c2643ef6df4744025c69fd51e602acb2cc5454c76dab8d9f0f0efa60841.scope: Deactivated successfully.
Dec 09 12:04:31 compute-0 podman[91643]: 2025-12-09 12:04:31.332847578 +0000 UTC m=+0.542659734 container died 7d629c2643ef6df4744025c69fd51e602acb2cc5454c76dab8d9f0f0efa60841 (image=quay.io/ceph/ceph:v19, name=loving_davinci, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 12:04:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8d36c62ab091f8abec6f598b336c10b87e6d72cf0bd794e043d18ddee1f9f67-merged.mount: Deactivated successfully.
Dec 09 12:04:31 compute-0 podman[91643]: 2025-12-09 12:04:31.371641452 +0000 UTC m=+0.581453608 container remove 7d629c2643ef6df4744025c69fd51e602acb2cc5454c76dab8d9f0f0efa60841 (image=quay.io/ceph/ceph:v19, name=loving_davinci, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 09 12:04:31 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 12:04:31 compute-0 systemd[1]: libpod-conmon-7d629c2643ef6df4744025c69fd51e602acb2cc5454c76dab8d9f0f0efa60841.scope: Deactivated successfully.
Dec 09 12:04:31 compute-0 sudo[91909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.conf.new
Dec 09 12:04:31 compute-0 sudo[91909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:31 compute-0 sudo[91909]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:31 compute-0 sudo[91582]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:31 compute-0 radosgw[89472]: ====== starting new request req=0x7fb91647e5d0 =====
Dec 09 12:04:31 compute-0 radosgw[89472]: ====== req done req=0x7fb91647e5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 09 12:04:31 compute-0 radosgw[89472]: beast: 0x7fb91647e5d0: 192.168.122.100 - anonymous [09/Dec/2025:12:04:31.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 09 12:04:31 compute-0 sudo[91946]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.conf.new /var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.conf
Dec 09 12:04:31 compute-0 sudo[91946]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:31 compute-0 sudo[91946]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:31 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 09 12:04:31 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 09 12:04:31 compute-0 sudo[91971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 09 12:04:31 compute-0 sudo[91971]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:31 compute-0 sudo[91971]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:31 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec 09 12:04:31 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec 09 12:04:31 compute-0 sudo[92019]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysoadurvdwzasnrujhckcouuxqbcegex ; /usr/bin/python3'
Dec 09 12:04:31 compute-0 sudo[92019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:04:31 compute-0 sudo[92020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/etc/ceph
Dec 09 12:04:31 compute-0 sudo[92020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:31 compute-0 sudo[92020]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:31 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec 09 12:04:31 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec 09 12:04:31 compute-0 sudo[92047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/etc/ceph/ceph.client.admin.keyring.new
Dec 09 12:04:31 compute-0 sudo[92047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:31 compute-0 sudo[92047]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:31 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Dec 09 12:04:31 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Dec 09 12:04:31 compute-0 sudo[92072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732
Dec 09 12:04:31 compute-0 sudo[92072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:31 compute-0 python3[92033]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 750b57e3-924f-51a5-ab09-01517535f732 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-url http://192.168.122.100:3100 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 12:04:31 compute-0 sudo[92072]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:31 compute-0 sudo[92098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/etc/ceph/ceph.client.admin.keyring.new
Dec 09 12:04:31 compute-0 sudo[92098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:31 compute-0 podman[92097]: 2025-12-09 12:04:31.750161405 +0000 UTC m=+0.038669141 container create 9d3ab42727970556e2bfa9c06609e1bc05a2d4775c78e2da4c31dd9aa5fd9767 (image=quay.io/ceph/ceph:v19, name=silly_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 09 12:04:31 compute-0 sudo[92098]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:31 compute-0 systemd[1]: Started libpod-conmon-9d3ab42727970556e2bfa9c06609e1bc05a2d4775c78e2da4c31dd9aa5fd9767.scope.
Dec 09 12:04:31 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:04:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77fa5235c4871de4089ad253825e1d61cf9d8c17ef445e7167757528cc12d66b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77fa5235c4871de4089ad253825e1d61cf9d8c17ef445e7167757528cc12d66b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77fa5235c4871de4089ad253825e1d61cf9d8c17ef445e7167757528cc12d66b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:31 compute-0 podman[92097]: 2025-12-09 12:04:31.820796805 +0000 UTC m=+0.109304571 container init 9d3ab42727970556e2bfa9c06609e1bc05a2d4775c78e2da4c31dd9aa5fd9767 (image=quay.io/ceph/ceph:v19, name=silly_margulis, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 09 12:04:31 compute-0 podman[92097]: 2025-12-09 12:04:31.826006106 +0000 UTC m=+0.114513842 container start 9d3ab42727970556e2bfa9c06609e1bc05a2d4775c78e2da4c31dd9aa5fd9767 (image=quay.io/ceph/ceph:v19, name=silly_margulis, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Dec 09 12:04:31 compute-0 podman[92097]: 2025-12-09 12:04:31.829317525 +0000 UTC m=+0.117825281 container attach 9d3ab42727970556e2bfa9c06609e1bc05a2d4775c78e2da4c31dd9aa5fd9767 (image=quay.io/ceph/ceph:v19, name=silly_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec 09 12:04:31 compute-0 podman[92097]: 2025-12-09 12:04:31.734686037 +0000 UTC m=+0.023193803 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:04:31 compute-0 sudo[92163]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/etc/ceph/ceph.client.admin.keyring.new
Dec 09 12:04:31 compute-0 sudo[92163]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:31 compute-0 sudo[92163]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:31 compute-0 sudo[92189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/etc/ceph/ceph.client.admin.keyring.new
Dec 09 12:04:31 compute-0 sudo[92189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:31 compute-0 sudo[92189]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:31 compute-0 sudo[92224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Dec 09 12:04:31 compute-0 sudo[92224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:31 compute-0 sudo[92224]: pam_unix(sudo:session): session closed for user root
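# Editor's note: the admin keyring gets the same staged write as ceph.conf,
# but is tightened to mode 600 (see the chmod 600 step above) before the mv,
# whereas ceph.conf is left world-readable at 644. A one-line check of the
# result on this host:
#   sudo stat -c '%U:%G %a %n' /etc/ceph/ceph.client.admin.keyring
# expected output: root:root 600 /etc/ceph/ceph.client.admin.keyring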
Dec 09 12:04:31 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.client.admin.keyring
Dec 09 12:04:31 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.client.admin.keyring
Dec 09 12:04:32 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : mgrmap e16: compute-0.wfxreg(active, since 4s), standbys: compute-1.lorvly, compute-2.hvlbot
Dec 09 12:04:32 compute-0 ceph-mon[74388]: 6.1 scrub starts
Dec 09 12:04:32 compute-0 ceph-mon[74388]: 6.1 scrub ok
Dec 09 12:04:32 compute-0 ceph-mon[74388]: Updating compute-0:/etc/ceph/ceph.conf
Dec 09 12:04:32 compute-0 ceph-mon[74388]: Updating compute-1:/etc/ceph/ceph.conf
Dec 09 12:04:32 compute-0 ceph-mon[74388]: Updating compute-2:/etc/ceph/ceph.conf
Dec 09 12:04:32 compute-0 ceph-mon[74388]: from='client.14421 -' entity='client.admin' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.122.100:9093", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 12:04:32 compute-0 ceph-mon[74388]: Updating compute-0:/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.conf
Dec 09 12:04:32 compute-0 ceph-mon[74388]: Updating compute-2:/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.conf
Dec 09 12:04:32 compute-0 ceph-mon[74388]: Updating compute-1:/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.conf
Dec 09 12:04:32 compute-0 ceph-mon[74388]: 4.e scrub starts
Dec 09 12:04:32 compute-0 ceph-mon[74388]: 4.e scrub ok
Dec 09 12:04:32 compute-0 ceph-mon[74388]: from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:32 compute-0 ceph-mon[74388]: 2.9 scrub starts
Dec 09 12:04:32 compute-0 ceph-mon[74388]: 2.9 scrub ok
Dec 09 12:04:32 compute-0 sudo[92258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config
Dec 09 12:04:32 compute-0 sudo[92258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:32 compute-0 sudo[92258]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:32 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.client.admin.keyring
Dec 09 12:04:32 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.client.admin.keyring
Dec 09 12:04:32 compute-0 sudo[92283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config
Dec 09 12:04:32 compute-0 sudo[92283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:32 compute-0 sudo[92283]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:32 compute-0 sudo[92308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.client.admin.keyring.new
Dec 09 12:04:32 compute-0 sudo[92308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:32 compute-0 sudo[92308]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:32 compute-0 ceph-mgr[74679]: log_channel(audit) log [DBG] : from='client.14433 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "http://192.168.122.100:3100", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 12:04:32 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_URL}] v 0)
Dec 09 12:04:32 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:32 compute-0 silly_margulis[92155]: Option GRAFANA_API_URL updated
Dec 09 12:04:32 compute-0 systemd[1]: libpod-9d3ab42727970556e2bfa9c06609e1bc05a2d4775c78e2da4c31dd9aa5fd9767.scope: Deactivated successfully.
Dec 09 12:04:32 compute-0 podman[92097]: 2025-12-09 12:04:32.222913982 +0000 UTC m=+0.511421718 container died 9d3ab42727970556e2bfa9c06609e1bc05a2d4775c78e2da4c31dd9aa5fd9767 (image=quay.io/ceph/ceph:v19, name=silly_margulis, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 12:04:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-77fa5235c4871de4089ad253825e1d61cf9d8c17ef445e7167757528cc12d66b-merged.mount: Deactivated successfully.
Dec 09 12:04:32 compute-0 sudo[92334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732
Dec 09 12:04:32 compute-0 sudo[92334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:32 compute-0 sudo[92334]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:32 compute-0 podman[92097]: 2025-12-09 12:04:32.256267787 +0000 UTC m=+0.544775523 container remove 9d3ab42727970556e2bfa9c06609e1bc05a2d4775c78e2da4c31dd9aa5fd9767 (image=quay.io/ceph/ceph:v19, name=silly_margulis, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 12:04:32 compute-0 systemd[1]: libpod-conmon-9d3ab42727970556e2bfa9c06609e1bc05a2d4775c78e2da4c31dd9aa5fd9767.scope: Deactivated successfully.
Dec 09 12:04:32 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.client.admin.keyring
Dec 09 12:04:32 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.client.admin.keyring
Dec 09 12:04:32 compute-0 sudo[92019]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:32 compute-0 sudo[92371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.client.admin.keyring.new
Dec 09 12:04:32 compute-0 sudo[92371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:32 compute-0 sudo[92371]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:32 compute-0 sudo[92419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.client.admin.keyring.new
Dec 09 12:04:32 compute-0 sudo[92419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:32 compute-0 sudo[92419]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:32 compute-0 sudo[92444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.client.admin.keyring.new
Dec 09 12:04:32 compute-0 sudo[92444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:32 compute-0 sudo[92444]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:32 compute-0 sudo[92493]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uunkkfyywwamzvynhhwfdwrkeqjyavye ; /usr/bin/python3'
Dec 09 12:04:32 compute-0 sudo[92493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:04:32 compute-0 sudo[92492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.client.admin.keyring.new /var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.client.admin.keyring
Dec 09 12:04:32 compute-0 sudo[92492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:32 compute-0 sudo[92492]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:32 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 12:04:32 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:32 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 12:04:32 compute-0 python3[92502]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 750b57e3-924f-51a5-ab09-01517535f732 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module disable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
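[annotation] Stripped of the ansible-ansible.legacy.command wrapping, the task above runs the ceph CLI from a throwaway container rather than requiring ceph-common on the host. Reconstructed as a shell command from the _raw_params (the assimilate_ceph.conf and ceph_spec.yaml mounts are carried along by the playbook but presumably unused by this subcommand, so they are omitted here):

    podman run --rm --net=host --ipc=host --interactive \
        --volume /etc/ceph:/etc/ceph:z \
        --entrypoint ceph quay.io/ceph/ceph:v19 \
        --fsid 750b57e3-924f-51a5-ab09-01517535f732 \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        mgr module disable dashboard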
Dec 09 12:04:32 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 2.e scrub starts
Dec 09 12:04:32 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 2.e scrub ok
Dec 09 12:04:32 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 09 12:04:32 compute-0 podman[92520]: 2025-12-09 12:04:32.708824452 +0000 UTC m=+0.040102409 container create f9aa3d7280a7ea01685bbcc5f356c85dfd00e8c9e7e03d6fc150ba40e2b7a166 (image=quay.io/ceph/ceph:v19, name=nervous_williamson, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 09 12:04:32 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:32 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:32 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 09 12:04:32 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:32 compute-0 systemd[1]: Started libpod-conmon-f9aa3d7280a7ea01685bbcc5f356c85dfd00e8c9e7e03d6fc150ba40e2b7a166.scope.
Dec 09 12:04:32 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:04:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbc1135ed351354e0cc7ff1c763b185c988f35afd996e545d92e9c9e07e8fea1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbc1135ed351354e0cc7ff1c763b185c988f35afd996e545d92e9c9e07e8fea1/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbc1135ed351354e0cc7ff1c763b185c988f35afd996e545d92e9c9e07e8fea1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:32 compute-0 podman[92520]: 2025-12-09 12:04:32.770365133 +0000 UTC m=+0.101643090 container init f9aa3d7280a7ea01685bbcc5f356c85dfd00e8c9e7e03d6fc150ba40e2b7a166 (image=quay.io/ceph/ceph:v19, name=nervous_williamson, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 09 12:04:32 compute-0 podman[92520]: 2025-12-09 12:04:32.775015975 +0000 UTC m=+0.106293932 container start f9aa3d7280a7ea01685bbcc5f356c85dfd00e8c9e7e03d6fc150ba40e2b7a166 (image=quay.io/ceph/ceph:v19, name=nervous_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 12:04:32 compute-0 podman[92520]: 2025-12-09 12:04:32.779678178 +0000 UTC m=+0.110956155 container attach f9aa3d7280a7ea01685bbcc5f356c85dfd00e8c9e7e03d6fc150ba40e2b7a166 (image=quay.io/ceph/ceph:v19, name=nervous_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec 09 12:04:32 compute-0 podman[92520]: 2025-12-09 12:04:32.69387616 +0000 UTC m=+0.025154137 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:04:32 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 09 12:04:32 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:32 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 09 12:04:32 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:32 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 09 12:04:32 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:32 compute-0 ceph-mgr[74679]: [progress INFO root] update: starting ev 6103550d-28aa-4622-9e2d-a357f84fcf7d (Updating node-exporter deployment (+3 -> 3))
Dec 09 12:04:32 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-0 on compute-0
Dec 09 12:04:32 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-0 on compute-0
Dec 09 12:04:32 compute-0 sudo[92558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 12:04:32 compute-0 sudo[92558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:32 compute-0 sudo[92558]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:33 compute-0 ceph-mon[74388]: pgmap v5: 166 pgs: 166 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec 09 12:04:33 compute-0 ceph-mon[74388]: 5.0 scrub starts
Dec 09 12:04:33 compute-0 ceph-mon[74388]: 5.0 scrub ok
Dec 09 12:04:33 compute-0 ceph-mon[74388]: from='client.14427 -' entity='client.admin' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.122.100:9092", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 12:04:33 compute-0 ceph-mon[74388]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 09 12:04:33 compute-0 ceph-mon[74388]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec 09 12:04:33 compute-0 ceph-mon[74388]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec 09 12:04:33 compute-0 ceph-mon[74388]: Updating compute-0:/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.client.admin.keyring
Dec 09 12:04:33 compute-0 ceph-mon[74388]: mgrmap e16: compute-0.wfxreg(active, since 4s), standbys: compute-1.lorvly, compute-2.hvlbot
Dec 09 12:04:33 compute-0 ceph-mon[74388]: 4.c scrub starts
Dec 09 12:04:33 compute-0 ceph-mon[74388]: 4.c scrub ok
Dec 09 12:04:33 compute-0 ceph-mon[74388]: Updating compute-1:/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.client.admin.keyring
Dec 09 12:04:33 compute-0 ceph-mon[74388]: from='client.14433 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "http://192.168.122.100:3100", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 12:04:33 compute-0 ceph-mon[74388]: from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:33 compute-0 ceph-mon[74388]: 4.1c deep-scrub starts
Dec 09 12:04:33 compute-0 ceph-mon[74388]: 4.1c deep-scrub ok
Dec 09 12:04:33 compute-0 ceph-mon[74388]: from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:33 compute-0 ceph-mon[74388]: 2.e scrub starts
Dec 09 12:04:33 compute-0 ceph-mon[74388]: 2.e scrub ok
Dec 09 12:04:33 compute-0 ceph-mon[74388]: from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:33 compute-0 ceph-mon[74388]: from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:33 compute-0 ceph-mon[74388]: from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:33 compute-0 ceph-mon[74388]: from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:33 compute-0 ceph-mon[74388]: from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:33 compute-0 ceph-mon[74388]: from='mgr.14379 192.168.122.100:0/2780204078' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:33 compute-0 sudo[92583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/node-exporter:v1.7.0 --timeout 895 _orch deploy --fsid 750b57e3-924f-51a5-ab09-01517535f732
Dec 09 12:04:33 compute-0 sudo[92583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
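[annotation] The deployment of node-exporter.compute-0 is driven by the digest-suffixed copy of the cephadm script that the mgr pushed to the host; the mgr invokes it over ssh as ceph-admin via sudo, as logged above. The same copy can be run interactively to inspect what it manages; a sketch, assuming the copied script is still present:

    # list the daemons cephadm has deployed on this host
    sudo /bin/python3 /var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 ls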
Dec 09 12:04:33 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module disable", "module": "dashboard"} v 0)
Dec 09 12:04:33 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1520291103' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Dec 09 12:04:33 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v6: 166 pgs: 166 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec 09 12:04:33 compute-0 radosgw[89472]: ====== starting new request req=0x7fb91647e5d0 =====
Dec 09 12:04:33 compute-0 radosgw[89472]: ====== req done req=0x7fb91647e5d0 op status=0 http_status=200 latency=0.001000033s ======
Dec 09 12:04:33 compute-0 radosgw[89472]: beast: 0x7fb91647e5d0: 192.168.122.100 - anonymous [09/Dec/2025:12:04:33.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Dec 09 12:04:33 compute-0 systemd[1]: Reloading.
Dec 09 12:04:33 compute-0 systemd-sysv-generator[92680]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 12:04:33 compute-0 systemd-rc-local-generator[92671]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 12:04:33 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Dec 09 12:04:33 compute-0 systemd[1]: Reloading.
Dec 09 12:04:33 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Dec 09 12:04:33 compute-0 systemd-rc-local-generator[92718]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 12:04:33 compute-0 systemd-sysv-generator[92721]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 12:04:33 compute-0 systemd[1]: Starting Ceph node-exporter.compute-0 for 750b57e3-924f-51a5-ab09-01517535f732...
Dec 09 12:04:34 compute-0 ceph-mon[74388]: Updating compute-2:/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.client.admin.keyring
Dec 09 12:04:34 compute-0 ceph-mon[74388]: Deploying daemon node-exporter.compute-0 on compute-0
Dec 09 12:04:34 compute-0 ceph-mon[74388]: 5.1f scrub starts
Dec 09 12:04:34 compute-0 ceph-mon[74388]: 5.1f scrub ok
Dec 09 12:04:34 compute-0 ceph-mon[74388]: from='client.? 192.168.122.100:0/1520291103' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Dec 09 12:04:34 compute-0 ceph-mon[74388]: 3.0 scrub starts
Dec 09 12:04:34 compute-0 ceph-mon[74388]: 3.0 scrub ok
Dec 09 12:04:34 compute-0 ceph-mon[74388]: 2.6 scrub starts
Dec 09 12:04:34 compute-0 ceph-mon[74388]: 2.6 scrub ok
Dec 09 12:04:34 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1520291103' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Dec 09 12:04:34 compute-0 ceph-mgr[74679]: mgr handle_mgr_map respawning because set of enabled modules changed!
Dec 09 12:04:34 compute-0 ceph-mgr[74679]: mgr respawn  e: '/usr/bin/ceph-mgr'
Dec 09 12:04:34 compute-0 ceph-mgr[74679]: mgr respawn  0: '/usr/bin/ceph-mgr'
Dec 09 12:04:34 compute-0 ceph-mgr[74679]: mgr respawn  1: '-n'
Dec 09 12:04:34 compute-0 ceph-mgr[74679]: mgr respawn  2: 'mgr.compute-0.wfxreg'
Dec 09 12:04:34 compute-0 ceph-mgr[74679]: mgr respawn  3: '-f'
Dec 09 12:04:34 compute-0 ceph-mgr[74679]: mgr respawn  4: '--setuser'
Dec 09 12:04:34 compute-0 ceph-mgr[74679]: mgr respawn  5: 'ceph'
Dec 09 12:04:34 compute-0 ceph-mgr[74679]: mgr respawn  6: '--setgroup'
Dec 09 12:04:34 compute-0 ceph-mgr[74679]: mgr respawn  7: 'ceph'
Dec 09 12:04:34 compute-0 ceph-mgr[74679]: mgr respawn  8: '--default-log-to-file=false'
Dec 09 12:04:34 compute-0 ceph-mgr[74679]: mgr respawn  9: '--default-log-to-journald=true'
Dec 09 12:04:34 compute-0 ceph-mgr[74679]: mgr respawn  10: '--default-log-to-stderr=false'
Dec 09 12:04:34 compute-0 ceph-mgr[74679]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Dec 09 12:04:34 compute-0 ceph-mgr[74679]: mgr respawn  exe_path /proc/self/exe
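[annotation] The argv dump above is ceph-mgr re-executing itself because "mgr module disable dashboard" changed the set of enabled modules; the mgrmap entries that follow show compute-0.wfxreg staying active while every python module is reloaded (the "Loading python module" lines below). The resulting module state can be checked with the stock CLI; a sketch:

    ceph mgr module ls                  # enabled/disabled module summary
    ceph mgr module ls --format json    # machine-readable detail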
Dec 09 12:04:34 compute-0 systemd[1]: libpod-f9aa3d7280a7ea01685bbcc5f356c85dfd00e8c9e7e03d6fc150ba40e2b7a166.scope: Deactivated successfully.
Dec 09 12:04:34 compute-0 conmon[92535]: conmon f9aa3d7280a7ea01685b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f9aa3d7280a7ea01685bbcc5f356c85dfd00e8c9e7e03d6fc150ba40e2b7a166.scope/container/memory.events
Dec 09 12:04:34 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : mgrmap e17: compute-0.wfxreg(active, since 6s), standbys: compute-1.lorvly, compute-2.hvlbot
Dec 09 12:04:34 compute-0 podman[92520]: 2025-12-09 12:04:34.098935658 +0000 UTC m=+1.430213615 container died f9aa3d7280a7ea01685bbcc5f356c85dfd00e8c9e7e03d6fc150ba40e2b7a166 (image=quay.io/ceph/ceph:v19, name=nervous_williamson, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 12:04:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-bbc1135ed351354e0cc7ff1c763b185c988f35afd996e545d92e9c9e07e8fea1-merged.mount: Deactivated successfully.
Dec 09 12:04:34 compute-0 podman[92520]: 2025-12-09 12:04:34.144519125 +0000 UTC m=+1.475797082 container remove f9aa3d7280a7ea01685bbcc5f356c85dfd00e8c9e7e03d6fc150ba40e2b7a166 (image=quay.io/ceph/ceph:v19, name=nervous_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec 09 12:04:34 compute-0 systemd[1]: libpod-conmon-f9aa3d7280a7ea01685bbcc5f356c85dfd00e8c9e7e03d6fc150ba40e2b7a166.scope: Deactivated successfully.
Dec 09 12:04:34 compute-0 sshd-session[90810]: Connection closed by 192.168.122.100 port 55744
Dec 09 12:04:34 compute-0 sudo[92493]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:34 compute-0 sshd-session[90796]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 09 12:04:34 compute-0 systemd-logind[799]: Session 34 logged out. Waiting for processes to exit.
Dec 09 12:04:34 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: ignoring --setuser ceph since I am not root
Dec 09 12:04:34 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: ignoring --setgroup ceph since I am not root
Dec 09 12:04:34 compute-0 ceph-mgr[74679]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Dec 09 12:04:34 compute-0 ceph-mgr[74679]: pidfile_write: ignore empty --pid-file
Dec 09 12:04:34 compute-0 bash[92789]: Trying to pull quay.io/prometheus/node-exporter:v1.7.0...
Dec 09 12:04:34 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'alerts'
Dec 09 12:04:34 compute-0 sudo[92837]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-faymkfpmizyosqqtbjstiyxwejcpbepg ; /usr/bin/python3'
Dec 09 12:04:34 compute-0 sudo[92837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:04:34 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:04:34.353+0000 7f14874c8140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 09 12:04:34 compute-0 ceph-mgr[74679]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 09 12:04:34 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'balancer'
Dec 09 12:04:34 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:04:34.445+0000 7f14874c8140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 09 12:04:34 compute-0 ceph-mgr[74679]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 09 12:04:34 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'cephadm'
Dec 09 12:04:34 compute-0 python3[92839]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 750b57e3-924f-51a5-ab09-01517535f732 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module enable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 12:04:34 compute-0 podman[92840]: 2025-12-09 12:04:34.579104179 +0000 UTC m=+0.053969984 container create 555b15e011512850aee68d01d533bf8f373a0ba4dd1d9a4019607f1e7866a875 (image=quay.io/ceph/ceph:v19, name=confident_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 12:04:34 compute-0 systemd[1]: Started libpod-conmon-555b15e011512850aee68d01d533bf8f373a0ba4dd1d9a4019607f1e7866a875.scope.
Dec 09 12:04:34 compute-0 podman[92840]: 2025-12-09 12:04:34.55630245 +0000 UTC m=+0.031168275 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:04:34 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:04:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f103f176ae67a45f60636441059fa195d57478b4a448f50f04ec18eb2e7d220/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f103f176ae67a45f60636441059fa195d57478b4a448f50f04ec18eb2e7d220/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f103f176ae67a45f60636441059fa195d57478b4a448f50f04ec18eb2e7d220/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:34 compute-0 podman[92840]: 2025-12-09 12:04:34.687765178 +0000 UTC m=+0.162631003 container init 555b15e011512850aee68d01d533bf8f373a0ba4dd1d9a4019607f1e7866a875 (image=quay.io/ceph/ceph:v19, name=confident_albattani, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 12:04:34 compute-0 podman[92840]: 2025-12-09 12:04:34.69515447 +0000 UTC m=+0.170020275 container start 555b15e011512850aee68d01d533bf8f373a0ba4dd1d9a4019607f1e7866a875 (image=quay.io/ceph/ceph:v19, name=confident_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 12:04:34 compute-0 podman[92840]: 2025-12-09 12:04:34.698924834 +0000 UTC m=+0.173790639 container attach 555b15e011512850aee68d01d533bf8f373a0ba4dd1d9a4019607f1e7866a875 (image=quay.io/ceph/ceph:v19, name=confident_albattani, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 12:04:34 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Dec 09 12:04:34 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Dec 09 12:04:34 compute-0 bash[92789]: Getting image source signatures
Dec 09 12:04:34 compute-0 bash[92789]: Copying blob sha256:324153f2810a9927fcce320af9e4e291e0b6e805cbdd1f338386c756b9defa24
Dec 09 12:04:34 compute-0 bash[92789]: Copying blob sha256:455fd88e5221bc1e278ef2d059cd70e4df99a24e5af050ede621534276f6cf9a
Dec 09 12:04:34 compute-0 bash[92789]: Copying blob sha256:2abcce694348cd2c949c0e98a7400ebdfd8341021bcf6b541bc72033ce982510
Dec 09 12:04:35 compute-0 ceph-mon[74388]: 5.f scrub starts
Dec 09 12:04:35 compute-0 ceph-mon[74388]: 5.f scrub ok
Dec 09 12:04:35 compute-0 ceph-mon[74388]: from='client.? 192.168.122.100:0/1520291103' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Dec 09 12:04:35 compute-0 ceph-mon[74388]: mgrmap e17: compute-0.wfxreg(active, since 6s), standbys: compute-1.lorvly, compute-2.hvlbot
Dec 09 12:04:35 compute-0 ceph-mon[74388]: 6.1e deep-scrub starts
Dec 09 12:04:35 compute-0 ceph-mon[74388]: 6.1e deep-scrub ok
Dec 09 12:04:35 compute-0 ceph-mon[74388]: 2.4 scrub starts
Dec 09 12:04:35 compute-0 ceph-mon[74388]: 2.4 scrub ok
Dec 09 12:04:35 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "dashboard"} v 0)
Dec 09 12:04:35 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1376747930' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Dec 09 12:04:35 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'crash'
Dec 09 12:04:35 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:04:35.374+0000 7f14874c8140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 09 12:04:35 compute-0 ceph-mgr[74679]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 09 12:04:35 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'dashboard'
Dec 09 12:04:35 compute-0 radosgw[89472]: ====== starting new request req=0x7fb91647e5d0 =====
Dec 09 12:04:35 compute-0 radosgw[89472]: ====== req done req=0x7fb91647e5d0 op status=0 http_status=200 latency=0.001000033s ======
Dec 09 12:04:35 compute-0 radosgw[89472]: beast: 0x7fb91647e5d0: 192.168.122.100 - anonymous [09/Dec/2025:12:04:35.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Dec 09 12:04:35 compute-0 bash[92789]: Copying config sha256:72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e
Dec 09 12:04:35 compute-0 bash[92789]: Writing manifest to image destination
Dec 09 12:04:35 compute-0 podman[92789]: 2025-12-09 12:04:35.501661719 +0000 UTC m=+1.288149179 container create 8a80e9c07f8151d01c8e0945e5cbbf405a1c7fd22d214e114f9c95218a689c8b (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-750b57e3-924f-51a5-ab09-01517535f732-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 09 12:04:35 compute-0 podman[92789]: 2025-12-09 12:04:35.48583786 +0000 UTC m=+1.272325340 image pull 72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e quay.io/prometheus/node-exporter:v1.7.0
Dec 09 12:04:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d49922acdd4c0bbdf5df49e9542a2e9a220937bcb5dc03b57f7230dc11af7eb/merged/etc/node-exporter supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:35 compute-0 podman[92789]: 2025-12-09 12:04:35.552460938 +0000 UTC m=+1.338948398 container init 8a80e9c07f8151d01c8e0945e5cbbf405a1c7fd22d214e114f9c95218a689c8b (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-750b57e3-924f-51a5-ab09-01517535f732-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 09 12:04:35 compute-0 podman[92789]: 2025-12-09 12:04:35.557175662 +0000 UTC m=+1.343663122 container start 8a80e9c07f8151d01c8e0945e5cbbf405a1c7fd22d214e114f9c95218a689c8b (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-750b57e3-924f-51a5-ab09-01517535f732-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 09 12:04:35 compute-0 bash[92789]: 8a80e9c07f8151d01c8e0945e5cbbf405a1c7fd22d214e114f9c95218a689c8b
Dec 09 12:04:35 compute-0 systemd[1]: Started Ceph node-exporter.compute-0 for 750b57e3-924f-51a5-ab09-01517535f732.
Dec 09 12:04:35 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-node-exporter-compute-0[92952]: ts=2025-12-09T12:04:35.568Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)"
Dec 09 12:04:35 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-node-exporter-compute-0[92952]: ts=2025-12-09T12:04:35.568Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)"
Dec 09 12:04:35 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-node-exporter-compute-0[92952]: ts=2025-12-09T12:04:35.570Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Dec 09 12:04:35 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-node-exporter-compute-0[92952]: ts=2025-12-09T12:04:35.570Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Dec 09 12:04:35 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-node-exporter-compute-0[92952]: ts=2025-12-09T12:04:35.572Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Dec 09 12:04:35 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-node-exporter-compute-0[92952]: ts=2025-12-09T12:04:35.572Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Dec 09 12:04:35 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-node-exporter-compute-0[92952]: ts=2025-12-09T12:04:35.572Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Dec 09 12:04:35 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-node-exporter-compute-0[92952]: ts=2025-12-09T12:04:35.572Z caller=node_exporter.go:117 level=info collector=arp
Dec 09 12:04:35 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-node-exporter-compute-0[92952]: ts=2025-12-09T12:04:35.572Z caller=node_exporter.go:117 level=info collector=bcache
Dec 09 12:04:35 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-node-exporter-compute-0[92952]: ts=2025-12-09T12:04:35.572Z caller=node_exporter.go:117 level=info collector=bonding
Dec 09 12:04:35 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-node-exporter-compute-0[92952]: ts=2025-12-09T12:04:35.572Z caller=node_exporter.go:117 level=info collector=btrfs
Dec 09 12:04:35 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-node-exporter-compute-0[92952]: ts=2025-12-09T12:04:35.572Z caller=node_exporter.go:117 level=info collector=conntrack
Dec 09 12:04:35 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-node-exporter-compute-0[92952]: ts=2025-12-09T12:04:35.572Z caller=node_exporter.go:117 level=info collector=cpu
Dec 09 12:04:35 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-node-exporter-compute-0[92952]: ts=2025-12-09T12:04:35.572Z caller=node_exporter.go:117 level=info collector=cpufreq
Dec 09 12:04:35 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-node-exporter-compute-0[92952]: ts=2025-12-09T12:04:35.572Z caller=node_exporter.go:117 level=info collector=diskstats
Dec 09 12:04:35 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-node-exporter-compute-0[92952]: ts=2025-12-09T12:04:35.572Z caller=node_exporter.go:117 level=info collector=dmi
Dec 09 12:04:35 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-node-exporter-compute-0[92952]: ts=2025-12-09T12:04:35.572Z caller=node_exporter.go:117 level=info collector=edac
Dec 09 12:04:35 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-node-exporter-compute-0[92952]: ts=2025-12-09T12:04:35.572Z caller=node_exporter.go:117 level=info collector=entropy
Dec 09 12:04:35 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-node-exporter-compute-0[92952]: ts=2025-12-09T12:04:35.572Z caller=node_exporter.go:117 level=info collector=fibrechannel
Dec 09 12:04:35 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-node-exporter-compute-0[92952]: ts=2025-12-09T12:04:35.572Z caller=node_exporter.go:117 level=info collector=filefd
Dec 09 12:04:35 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-node-exporter-compute-0[92952]: ts=2025-12-09T12:04:35.572Z caller=node_exporter.go:117 level=info collector=filesystem
Dec 09 12:04:35 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-node-exporter-compute-0[92952]: ts=2025-12-09T12:04:35.572Z caller=node_exporter.go:117 level=info collector=hwmon
Dec 09 12:04:35 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-node-exporter-compute-0[92952]: ts=2025-12-09T12:04:35.572Z caller=node_exporter.go:117 level=info collector=infiniband
Dec 09 12:04:35 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-node-exporter-compute-0[92952]: ts=2025-12-09T12:04:35.572Z caller=node_exporter.go:117 level=info collector=ipvs
Dec 09 12:04:35 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-node-exporter-compute-0[92952]: ts=2025-12-09T12:04:35.572Z caller=node_exporter.go:117 level=info collector=loadavg
Dec 09 12:04:35 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-node-exporter-compute-0[92952]: ts=2025-12-09T12:04:35.572Z caller=node_exporter.go:117 level=info collector=mdadm
Dec 09 12:04:35 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-node-exporter-compute-0[92952]: ts=2025-12-09T12:04:35.572Z caller=node_exporter.go:117 level=info collector=meminfo
Dec 09 12:04:35 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-node-exporter-compute-0[92952]: ts=2025-12-09T12:04:35.572Z caller=node_exporter.go:117 level=info collector=netclass
Dec 09 12:04:35 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-node-exporter-compute-0[92952]: ts=2025-12-09T12:04:35.572Z caller=node_exporter.go:117 level=info collector=netdev
Dec 09 12:04:35 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-node-exporter-compute-0[92952]: ts=2025-12-09T12:04:35.572Z caller=node_exporter.go:117 level=info collector=netstat
Dec 09 12:04:35 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-node-exporter-compute-0[92952]: ts=2025-12-09T12:04:35.572Z caller=node_exporter.go:117 level=info collector=nfs
Dec 09 12:04:35 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-node-exporter-compute-0[92952]: ts=2025-12-09T12:04:35.572Z caller=node_exporter.go:117 level=info collector=nfsd
Dec 09 12:04:35 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-node-exporter-compute-0[92952]: ts=2025-12-09T12:04:35.572Z caller=node_exporter.go:117 level=info collector=nvme
Dec 09 12:04:35 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-node-exporter-compute-0[92952]: ts=2025-12-09T12:04:35.572Z caller=node_exporter.go:117 level=info collector=os
Dec 09 12:04:35 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-node-exporter-compute-0[92952]: ts=2025-12-09T12:04:35.572Z caller=node_exporter.go:117 level=info collector=powersupplyclass
Dec 09 12:04:35 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-node-exporter-compute-0[92952]: ts=2025-12-09T12:04:35.572Z caller=node_exporter.go:117 level=info collector=pressure
Dec 09 12:04:35 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-node-exporter-compute-0[92952]: ts=2025-12-09T12:04:35.572Z caller=node_exporter.go:117 level=info collector=rapl
Dec 09 12:04:35 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-node-exporter-compute-0[92952]: ts=2025-12-09T12:04:35.572Z caller=node_exporter.go:117 level=info collector=schedstat
Dec 09 12:04:35 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-node-exporter-compute-0[92952]: ts=2025-12-09T12:04:35.572Z caller=node_exporter.go:117 level=info collector=selinux
Dec 09 12:04:35 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-node-exporter-compute-0[92952]: ts=2025-12-09T12:04:35.572Z caller=node_exporter.go:117 level=info collector=sockstat
Dec 09 12:04:35 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-node-exporter-compute-0[92952]: ts=2025-12-09T12:04:35.572Z caller=node_exporter.go:117 level=info collector=softnet
Dec 09 12:04:35 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-node-exporter-compute-0[92952]: ts=2025-12-09T12:04:35.572Z caller=node_exporter.go:117 level=info collector=stat
Dec 09 12:04:35 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-node-exporter-compute-0[92952]: ts=2025-12-09T12:04:35.572Z caller=node_exporter.go:117 level=info collector=tapestats
Dec 09 12:04:35 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-node-exporter-compute-0[92952]: ts=2025-12-09T12:04:35.572Z caller=node_exporter.go:117 level=info collector=textfile
Dec 09 12:04:35 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-node-exporter-compute-0[92952]: ts=2025-12-09T12:04:35.572Z caller=node_exporter.go:117 level=info collector=thermal_zone
Dec 09 12:04:35 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-node-exporter-compute-0[92952]: ts=2025-12-09T12:04:35.572Z caller=node_exporter.go:117 level=info collector=time
Dec 09 12:04:35 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-node-exporter-compute-0[92952]: ts=2025-12-09T12:04:35.572Z caller=node_exporter.go:117 level=info collector=udp_queues
Dec 09 12:04:35 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-node-exporter-compute-0[92952]: ts=2025-12-09T12:04:35.572Z caller=node_exporter.go:117 level=info collector=uname
Dec 09 12:04:35 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-node-exporter-compute-0[92952]: ts=2025-12-09T12:04:35.572Z caller=node_exporter.go:117 level=info collector=vmstat
Dec 09 12:04:35 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-node-exporter-compute-0[92952]: ts=2025-12-09T12:04:35.572Z caller=node_exporter.go:117 level=info collector=xfs
Dec 09 12:04:35 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-node-exporter-compute-0[92952]: ts=2025-12-09T12:04:35.572Z caller=node_exporter.go:117 level=info collector=zfs
Dec 09 12:04:35 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-node-exporter-compute-0[92952]: ts=2025-12-09T12:04:35.573Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100
Dec 09 12:04:35 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-node-exporter-compute-0[92952]: ts=2025-12-09T12:04:35.573Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100
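[annotation] node-exporter.compute-0 is now serving plaintext metrics on *:9100 with TLS disabled, the endpoint typically scraped by the cephadm-managed Prometheus. A quick liveness probe against the address logged above; a sketch, run on the host:

    curl -s http://localhost:9100/metrics | head -n 5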
Dec 09 12:04:35 compute-0 sudo[92583]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:35 compute-0 systemd[1]: session-34.scope: Deactivated successfully.
Dec 09 12:04:35 compute-0 systemd[1]: session-34.scope: Consumed 5.404s CPU time.
Dec 09 12:04:35 compute-0 systemd-logind[799]: Removed session 34.
Dec 09 12:04:35 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 6.18 scrub starts
Dec 09 12:04:35 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 6.18 scrub ok
Dec 09 12:04:35 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'devicehealth'
Dec 09 12:04:36 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:04:36.082+0000 7f14874c8140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 09 12:04:36 compute-0 ceph-mgr[74679]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 09 12:04:36 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'diskprediction_local'
Dec 09 12:04:36 compute-0 ceph-mon[74388]: 4.1b deep-scrub starts
Dec 09 12:04:36 compute-0 ceph-mon[74388]: 4.1b deep-scrub ok
Dec 09 12:04:36 compute-0 ceph-mon[74388]: from='client.? 192.168.122.100:0/1376747930' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Dec 09 12:04:36 compute-0 ceph-mon[74388]: 6.1b scrub starts
Dec 09 12:04:36 compute-0 ceph-mon[74388]: 6.1b scrub ok
Dec 09 12:04:36 compute-0 ceph-mon[74388]: 6.18 scrub starts
Dec 09 12:04:36 compute-0 ceph-mon[74388]: 6.18 scrub ok
Dec 09 12:04:36 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1376747930' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Dec 09 12:04:36 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : mgrmap e18: compute-0.wfxreg(active, since 8s), standbys: compute-1.lorvly, compute-2.hvlbot
Dec 09 12:04:36 compute-0 systemd[1]: libpod-555b15e011512850aee68d01d533bf8f373a0ba4dd1d9a4019607f1e7866a875.scope: Deactivated successfully.
Dec 09 12:04:36 compute-0 podman[92840]: 2025-12-09 12:04:36.14898165 +0000 UTC m=+1.623847475 container died 555b15e011512850aee68d01d533bf8f373a0ba4dd1d9a4019607f1e7866a875 (image=quay.io/ceph/ceph:v19, name=confident_albattani, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 12:04:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-9f103f176ae67a45f60636441059fa195d57478b4a448f50f04ec18eb2e7d220-merged.mount: Deactivated successfully.
Dec 09 12:04:36 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 09 12:04:36 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 09 12:04:36 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]:   from numpy import show_config as show_numpy_config
Dec 09 12:04:36 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:04:36.273+0000 7f14874c8140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 09 12:04:36 compute-0 ceph-mgr[74679]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 09 12:04:36 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'influx'
Dec 09 12:04:36 compute-0 podman[92840]: 2025-12-09 12:04:36.298374887 +0000 UTC m=+1.773240692 container remove 555b15e011512850aee68d01d533bf8f373a0ba4dd1d9a4019607f1e7866a875 (image=quay.io/ceph/ceph:v19, name=confident_albattani, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 09 12:04:36 compute-0 systemd[1]: libpod-conmon-555b15e011512850aee68d01d533bf8f373a0ba4dd1d9a4019607f1e7866a875.scope: Deactivated successfully.
Dec 09 12:04:36 compute-0 sudo[92837]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:36 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:04:36.353+0000 7f14874c8140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 09 12:04:36 compute-0 ceph-mgr[74679]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 09 12:04:36 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'insights'
Dec 09 12:04:36 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 12:04:36 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'iostat'
Dec 09 12:04:36 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:04:36.534+0000 7f14874c8140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 09 12:04:36 compute-0 ceph-mgr[74679]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 09 12:04:36 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'k8sevents'
Dec 09 12:04:36 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 6.1f scrub starts
Dec 09 12:04:36 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 6.1f scrub ok
Dec 09 12:04:36 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'localpool'
Dec 09 12:04:37 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'mds_autoscaler'
Dec 09 12:04:37 compute-0 python3[93049]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 09 12:04:37 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'mirroring'
Dec 09 12:04:37 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'nfs'
Dec 09 12:04:37 compute-0 ceph-mon[74388]: 5.1b scrub starts
Dec 09 12:04:37 compute-0 ceph-mon[74388]: 5.1b scrub ok
Dec 09 12:04:37 compute-0 ceph-mon[74388]: from='client.? 192.168.122.100:0/1376747930' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Dec 09 12:04:37 compute-0 ceph-mon[74388]: mgrmap e18: compute-0.wfxreg(active, since 8s), standbys: compute-1.lorvly, compute-2.hvlbot
Dec 09 12:04:37 compute-0 ceph-mon[74388]: 5.d scrub starts
Dec 09 12:04:37 compute-0 ceph-mon[74388]: 5.d scrub ok
Dec 09 12:04:37 compute-0 ceph-mon[74388]: 6.1f scrub starts
Dec 09 12:04:37 compute-0 ceph-mon[74388]: 6.1f scrub ok
Dec 09 12:04:37 compute-0 radosgw[89472]: ====== starting new request req=0x7fb91647e5d0 =====
Dec 09 12:04:37 compute-0 radosgw[89472]: ====== req done req=0x7fb91647e5d0 op status=0 http_status=200 latency=0.001000033s ======
Dec 09 12:04:37 compute-0 radosgw[89472]: beast: 0x7fb91647e5d0: 192.168.122.100 - anonymous [09/Dec/2025:12:04:37.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Dec 09 12:04:37 compute-0 python3[93120]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765281876.858282-37365-199020445045697/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=b1f36629bdb347469f4890c95dfdef5abc68c3ae backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 12:04:37 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:04:37.603+0000 7f14874c8140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 09 12:04:37 compute-0 ceph-mgr[74679]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 09 12:04:37 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'orchestrator'
Dec 09 12:04:37 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 6.c scrub starts
Dec 09 12:04:37 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 6.c scrub ok
Dec 09 12:04:37 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:04:37.852+0000 7f14874c8140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 09 12:04:37 compute-0 ceph-mgr[74679]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 09 12:04:37 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'osd_perf_query'
Dec 09 12:04:37 compute-0 sudo[93168]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eubwhsemiilbldhntsadawqbbykuejoh ; /usr/bin/python3'
Dec 09 12:04:37 compute-0 sudo[93168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:04:37 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:04:37.944+0000 7f14874c8140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 09 12:04:37 compute-0 ceph-mgr[74679]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 09 12:04:37 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'osd_support'
Dec 09 12:04:38 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:04:38.014+0000 7f14874c8140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 09 12:04:38 compute-0 ceph-mgr[74679]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 09 12:04:38 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'pg_autoscaler'
Dec 09 12:04:38 compute-0 python3[93170]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 750b57e3-924f-51a5-ab09-01517535f732 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 compute-1 compute-2 ' _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
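[annotation] This task creates the CephFS volume: "fs volume create" asks the mgr volumes/orchestrator machinery to create the cephfs data and metadata pools and to schedule MDS daemons according to the placement spec. The direct equivalent of the command embedded above:

    ceph fs volume create cephfs --placement="compute-0 compute-1 compute-2"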
Dec 09 12:04:38 compute-0 podman[93171]: 2025-12-09 12:04:38.101701598 +0000 UTC m=+0.054411768 container create d27f6a1cf1fc0a0b75aff3e62b5e1573a7d25e7e214858e164d3ee1a352a5fe4 (image=quay.io/ceph/ceph:v19, name=eager_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 09 12:04:38 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:04:38.101+0000 7f14874c8140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 09 12:04:38 compute-0 ceph-mgr[74679]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 09 12:04:38 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'progress'
Dec 09 12:04:38 compute-0 systemd[1]: Started libpod-conmon-d27f6a1cf1fc0a0b75aff3e62b5e1573a7d25e7e214858e164d3ee1a352a5fe4.scope.
Dec 09 12:04:38 compute-0 podman[93171]: 2025-12-09 12:04:38.078101587 +0000 UTC m=+0.030811757 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:04:38 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:04:38 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:04:38.185+0000 7f14874c8140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 09 12:04:38 compute-0 ceph-mgr[74679]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 09 12:04:38 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'prometheus'
Dec 09 12:04:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e3f1a5f81bf65982c4d205784b83189adb47a20a9e8fcb8497643de078def43/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e3f1a5f81bf65982c4d205784b83189adb47a20a9e8fcb8497643de078def43/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e3f1a5f81bf65982c4d205784b83189adb47a20a9e8fcb8497643de078def43/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:38 compute-0 podman[93171]: 2025-12-09 12:04:38.322533452 +0000 UTC m=+0.275243632 container init d27f6a1cf1fc0a0b75aff3e62b5e1573a7d25e7e214858e164d3ee1a352a5fe4 (image=quay.io/ceph/ceph:v19, name=eager_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 09 12:04:38 compute-0 podman[93171]: 2025-12-09 12:04:38.33070642 +0000 UTC m=+0.283416570 container start d27f6a1cf1fc0a0b75aff3e62b5e1573a7d25e7e214858e164d3ee1a352a5fe4 (image=quay.io/ceph/ceph:v19, name=eager_payne, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Dec 09 12:04:38 compute-0 podman[93171]: 2025-12-09 12:04:38.334564205 +0000 UTC m=+0.287274375 container attach d27f6a1cf1fc0a0b75aff3e62b5e1573a7d25e7e214858e164d3ee1a352a5fe4 (image=quay.io/ceph/ceph:v19, name=eager_payne, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 09 12:04:38 compute-0 ceph-mon[74388]: 3.1c scrub starts
Dec 09 12:04:38 compute-0 ceph-mon[74388]: 3.1c scrub ok
Dec 09 12:04:38 compute-0 ceph-mon[74388]: 5.b scrub starts
Dec 09 12:04:38 compute-0 ceph-mon[74388]: 5.b scrub ok
Dec 09 12:04:38 compute-0 ceph-mon[74388]: 6.c scrub starts
Dec 09 12:04:38 compute-0 ceph-mon[74388]: 6.c scrub ok
Dec 09 12:04:38 compute-0 ceph-mon[74388]: 4.1a scrub starts
Dec 09 12:04:38 compute-0 ceph-mon[74388]: 4.1a scrub ok
Dec 09 12:04:38 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:04:38.663+0000 7f14874c8140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 09 12:04:38 compute-0 ceph-mgr[74679]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 09 12:04:38 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'rbd_support'
Dec 09 12:04:38 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Dec 09 12:04:38 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:04:38.767+0000 7f14874c8140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 09 12:04:38 compute-0 ceph-mgr[74679]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 09 12:04:38 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'restful'
Dec 09 12:04:38 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Dec 09 12:04:38 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'rgw'
Dec 09 12:04:39 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:04:39.226+0000 7f14874c8140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 09 12:04:39 compute-0 ceph-mgr[74679]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 09 12:04:39 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'rook'
Dec 09 12:04:39 compute-0 radosgw[89472]: ====== starting new request req=0x7fb91647e5d0 =====
Dec 09 12:04:39 compute-0 radosgw[89472]: ====== req done req=0x7fb91647e5d0 op status=0 http_status=200 latency=0.001000035s ======
Dec 09 12:04:39 compute-0 radosgw[89472]: beast: 0x7fb91647e5d0: 192.168.122.100 - anonymous [09/Dec/2025:12:04:39.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Dec 09 12:04:39 compute-0 ceph-mon[74388]: 2.c scrub starts
Dec 09 12:04:39 compute-0 ceph-mon[74388]: 2.c scrub ok
Dec 09 12:04:39 compute-0 ceph-mon[74388]: 6.6 scrub starts
Dec 09 12:04:39 compute-0 ceph-mon[74388]: 6.6 scrub ok
Dec 09 12:04:39 compute-0 ceph-mon[74388]: 5.18 scrub starts
Dec 09 12:04:39 compute-0 ceph-mon[74388]: 5.18 scrub ok
Dec 09 12:04:39 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Dec 09 12:04:39 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Dec 09 12:04:39 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:04:39.892+0000 7f14874c8140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 09 12:04:39 compute-0 ceph-mgr[74679]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 09 12:04:39 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'selftest'
Dec 09 12:04:39 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:04:39.980+0000 7f14874c8140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 09 12:04:39 compute-0 ceph-mgr[74679]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 09 12:04:39 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'snap_schedule'
Dec 09 12:04:40 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:04:40.073+0000 7f14874c8140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 09 12:04:40 compute-0 ceph-mgr[74679]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 09 12:04:40 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'stats'
Dec 09 12:04:40 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'status'
Dec 09 12:04:40 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:04:40.235+0000 7f14874c8140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec 09 12:04:40 compute-0 ceph-mgr[74679]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec 09 12:04:40 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'telegraf'
Dec 09 12:04:40 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:04:40.315+0000 7f14874c8140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 09 12:04:40 compute-0 ceph-mgr[74679]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 09 12:04:40 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'telemetry'
Dec 09 12:04:40 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.lorvly restarted
Dec 09 12:04:40 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.lorvly started
Dec 09 12:04:40 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:04:40.500+0000 7f14874c8140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 09 12:04:40 compute-0 ceph-mgr[74679]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 09 12:04:40 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'test_orchestrator'
Dec 09 12:04:40 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.hvlbot restarted
Dec 09 12:04:40 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.hvlbot started
Dec 09 12:04:40 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : mgrmap e19: compute-0.wfxreg(active, since 13s), standbys: compute-1.lorvly, compute-2.hvlbot
Dec 09 12:04:40 compute-0 ceph-mon[74388]: 2.d scrub starts
Dec 09 12:04:40 compute-0 ceph-mon[74388]: 2.d scrub ok
Dec 09 12:04:40 compute-0 ceph-mon[74388]: 6.4 scrub starts
Dec 09 12:04:40 compute-0 ceph-mon[74388]: 6.4 scrub ok
Dec 09 12:04:40 compute-0 ceph-mon[74388]: 4.18 scrub starts
Dec 09 12:04:40 compute-0 ceph-mon[74388]: 4.18 scrub ok
Dec 09 12:04:40 compute-0 ceph-mon[74388]: Standby manager daemon compute-1.lorvly restarted
Dec 09 12:04:40 compute-0 ceph-mon[74388]: Standby manager daemon compute-1.lorvly started
Dec 09 12:04:40 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 6.0 scrub starts
Dec 09 12:04:40 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 6.0 scrub ok
Dec 09 12:04:40 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:04:40.775+0000 7f14874c8140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 09 12:04:40 compute-0 ceph-mgr[74679]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 09 12:04:40 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'volumes'
Dec 09 12:04:41 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:04:41.107+0000 7f14874c8140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 09 12:04:41 compute-0 ceph-mgr[74679]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 09 12:04:41 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'zabbix'
Dec 09 12:04:41 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:04:41.199+0000 7f14874c8140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 09 12:04:41 compute-0 ceph-mgr[74679]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 09 12:04:41 compute-0 ceph-mon[74388]: log_channel(cluster) log [INF] : Active manager daemon compute-0.wfxreg restarted
Dec 09 12:04:41 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Dec 09 12:04:41 compute-0 ceph-mon[74388]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.wfxreg
Dec 09 12:04:41 compute-0 ceph-mgr[74679]: ms_deliver_dispatch: unhandled message 0x55c2d15e1860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Dec 09 12:04:41 compute-0 ceph-mgr[74679]: mgr handle_mgr_map respawning because set of enabled modules changed!
Dec 09 12:04:41 compute-0 ceph-mgr[74679]: mgr respawn  e: '/usr/bin/ceph-mgr'
Dec 09 12:04:41 compute-0 ceph-mgr[74679]: mgr respawn  0: '/usr/bin/ceph-mgr'
Dec 09 12:04:41 compute-0 ceph-mgr[74679]: mgr respawn  1: '-n'
Dec 09 12:04:41 compute-0 ceph-mgr[74679]: mgr respawn  2: 'mgr.compute-0.wfxreg'
Dec 09 12:04:41 compute-0 ceph-mgr[74679]: mgr respawn  3: '-f'
Dec 09 12:04:41 compute-0 ceph-mgr[74679]: mgr respawn  4: '--setuser'
Dec 09 12:04:41 compute-0 ceph-mgr[74679]: mgr respawn  5: 'ceph'
Dec 09 12:04:41 compute-0 ceph-mgr[74679]: mgr respawn  6: '--setgroup'
Dec 09 12:04:41 compute-0 ceph-mgr[74679]: mgr respawn  7: 'ceph'
Dec 09 12:04:41 compute-0 ceph-mgr[74679]: mgr respawn  8: '--default-log-to-file=false'
Dec 09 12:04:41 compute-0 ceph-mgr[74679]: mgr respawn  9: '--default-log-to-journald=true'
Dec 09 12:04:41 compute-0 ceph-mgr[74679]: mgr respawn  10: '--default-log-to-stderr=false'
Dec 09 12:04:41 compute-0 ceph-mgr[74679]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Dec 09 12:04:41 compute-0 ceph-mgr[74679]: mgr respawn  exe_path /proc/self/exe
Dec 09 12:04:41 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Dec 09 12:04:41 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Dec 09 12:04:41 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : mgrmap e20: compute-0.wfxreg(active, starting, since 0.0337437s), standbys: compute-1.lorvly, compute-2.hvlbot
Dec 09 12:04:41 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: ignoring --setuser ceph since I am not root
Dec 09 12:04:41 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: ignoring --setgroup ceph since I am not root
Dec 09 12:04:41 compute-0 ceph-mgr[74679]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Dec 09 12:04:41 compute-0 ceph-mgr[74679]: pidfile_write: ignore empty --pid-file
Dec 09 12:04:41 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'alerts'
Dec 09 12:04:41 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 12:04:41 compute-0 radosgw[89472]: ====== starting new request req=0x7fb91647e5d0 =====
Dec 09 12:04:41 compute-0 radosgw[89472]: ====== req done req=0x7fb91647e5d0 op status=0 http_status=200 latency=0.001000036s ======
Dec 09 12:04:41 compute-0 radosgw[89472]: beast: 0x7fb91647e5d0: 192.168.122.100 - anonymous [09/Dec/2025:12:04:41.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000036s
Dec 09 12:04:41 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:04:41.457+0000 7f7ba6f91140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 09 12:04:41 compute-0 ceph-mgr[74679]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 09 12:04:41 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'balancer'
Dec 09 12:04:41 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:04:41.550+0000 7f7ba6f91140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 09 12:04:41 compute-0 ceph-mgr[74679]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 09 12:04:41 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'cephadm'
Dec 09 12:04:41 compute-0 ceph-mon[74388]: 2.10 deep-scrub starts
Dec 09 12:04:41 compute-0 ceph-mon[74388]: 2.10 deep-scrub ok
Dec 09 12:04:41 compute-0 ceph-mon[74388]: Standby manager daemon compute-2.hvlbot restarted
Dec 09 12:04:41 compute-0 ceph-mon[74388]: Standby manager daemon compute-2.hvlbot started
Dec 09 12:04:41 compute-0 ceph-mon[74388]: mgrmap e19: compute-0.wfxreg(active, since 13s), standbys: compute-1.lorvly, compute-2.hvlbot
Dec 09 12:04:41 compute-0 ceph-mon[74388]: 6.0 scrub starts
Dec 09 12:04:41 compute-0 ceph-mon[74388]: 6.0 scrub ok
Dec 09 12:04:41 compute-0 ceph-mon[74388]: 5.1c scrub starts
Dec 09 12:04:41 compute-0 ceph-mon[74388]: 5.1c scrub ok
Dec 09 12:04:41 compute-0 ceph-mon[74388]: Active manager daemon compute-0.wfxreg restarted
Dec 09 12:04:41 compute-0 ceph-mon[74388]: Activating manager daemon compute-0.wfxreg
Dec 09 12:04:41 compute-0 ceph-mon[74388]: osdmap e43: 3 total, 3 up, 3 in
Dec 09 12:04:41 compute-0 ceph-mon[74388]: mgrmap e20: compute-0.wfxreg(active, starting, since 0.0337437s), standbys: compute-1.lorvly, compute-2.hvlbot
Dec 09 12:04:41 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 6.f scrub starts
Dec 09 12:04:41 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 6.f scrub ok
Dec 09 12:04:42 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'crash'
Dec 09 12:04:42 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:04:42.553+0000 7f7ba6f91140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 09 12:04:42 compute-0 ceph-mgr[74679]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 09 12:04:42 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'dashboard'
Dec 09 12:04:42 compute-0 ceph-mon[74388]: 5.8 scrub starts
Dec 09 12:04:42 compute-0 ceph-mon[74388]: 5.8 scrub ok
Dec 09 12:04:42 compute-0 ceph-mon[74388]: 6.f scrub starts
Dec 09 12:04:42 compute-0 ceph-mon[74388]: 6.f scrub ok
Dec 09 12:04:42 compute-0 ceph-mon[74388]: 6.e scrub starts
Dec 09 12:04:42 compute-0 ceph-mon[74388]: 6.e scrub ok
Dec 09 12:04:42 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Dec 09 12:04:42 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Dec 09 12:04:43 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'devicehealth'
Dec 09 12:04:43 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:04:43.318+0000 7f7ba6f91140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 09 12:04:43 compute-0 ceph-mgr[74679]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 09 12:04:43 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'diskprediction_local'
Dec 09 12:04:43 compute-0 radosgw[89472]: ====== starting new request req=0x7fb91647e5d0 =====
Dec 09 12:04:43 compute-0 radosgw[89472]: ====== req done req=0x7fb91647e5d0 op status=0 http_status=200 latency=0.001000036s ======
Dec 09 12:04:43 compute-0 radosgw[89472]: beast: 0x7fb91647e5d0: 192.168.122.100 - anonymous [09/Dec/2025:12:04:43.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000036s
Dec 09 12:04:43 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 09 12:04:43 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 09 12:04:43 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]:   from numpy import show_config as show_numpy_config
Dec 09 12:04:43 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:04:43.526+0000 7f7ba6f91140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 09 12:04:43 compute-0 ceph-mgr[74679]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 09 12:04:43 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'influx'
Dec 09 12:04:43 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:04:43.607+0000 7f7ba6f91140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 09 12:04:43 compute-0 ceph-mgr[74679]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 09 12:04:43 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'insights'
Dec 09 12:04:43 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'iostat'
Dec 09 12:04:43 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:04:43.793+0000 7f7ba6f91140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 09 12:04:43 compute-0 ceph-mgr[74679]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 09 12:04:43 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'k8sevents'
Dec 09 12:04:43 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 6.b scrub starts
Dec 09 12:04:43 compute-0 ceph-mon[74388]: 6.17 scrub starts
Dec 09 12:04:43 compute-0 ceph-mon[74388]: 6.17 scrub ok
Dec 09 12:04:43 compute-0 ceph-mon[74388]: 6.9 scrub starts
Dec 09 12:04:43 compute-0 ceph-mon[74388]: 6.9 scrub ok
Dec 09 12:04:43 compute-0 ceph-mon[74388]: 6.1a scrub starts
Dec 09 12:04:43 compute-0 ceph-mon[74388]: 6.1a scrub ok
Dec 09 12:04:43 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 6.b scrub ok
Dec 09 12:04:44 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'localpool'
Dec 09 12:04:44 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'mds_autoscaler'
Dec 09 12:04:44 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'mirroring'
Dec 09 12:04:44 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'nfs'
Dec 09 12:04:44 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 6.14 scrub starts
Dec 09 12:04:44 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 6.14 scrub ok
Dec 09 12:04:44 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:04:44.930+0000 7f7ba6f91140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 09 12:04:44 compute-0 ceph-mgr[74679]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 09 12:04:44 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'orchestrator'
Dec 09 12:04:45 compute-0 ceph-mon[74388]: 5.12 scrub starts
Dec 09 12:04:45 compute-0 ceph-mon[74388]: 5.12 scrub ok
Dec 09 12:04:45 compute-0 ceph-mon[74388]: 6.b scrub starts
Dec 09 12:04:45 compute-0 ceph-mon[74388]: 6.b scrub ok
Dec 09 12:04:45 compute-0 ceph-mon[74388]: 6.19 scrub starts
Dec 09 12:04:45 compute-0 ceph-mon[74388]: 6.19 scrub ok
Dec 09 12:04:45 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:04:45.189+0000 7f7ba6f91140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 09 12:04:45 compute-0 ceph-mgr[74679]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 09 12:04:45 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'osd_perf_query'
Dec 09 12:04:45 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:04:45.280+0000 7f7ba6f91140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 09 12:04:45 compute-0 ceph-mgr[74679]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 09 12:04:45 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'osd_support'
Dec 09 12:04:45 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:04:45.355+0000 7f7ba6f91140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 09 12:04:45 compute-0 ceph-mgr[74679]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 09 12:04:45 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'pg_autoscaler'
Dec 09 12:04:45 compute-0 radosgw[89472]: ====== starting new request req=0x7fb91647e5d0 =====
Dec 09 12:04:45 compute-0 radosgw[89472]: ====== req done req=0x7fb91647e5d0 op status=0 http_status=200 latency=0.001000035s ======
Dec 09 12:04:45 compute-0 radosgw[89472]: beast: 0x7fb91647e5d0: 192.168.122.100 - anonymous [09/Dec/2025:12:04:45.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Dec 09 12:04:45 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:04:45.443+0000 7f7ba6f91140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 09 12:04:45 compute-0 ceph-mgr[74679]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 09 12:04:45 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'progress'
Dec 09 12:04:45 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:04:45.526+0000 7f7ba6f91140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 09 12:04:45 compute-0 ceph-mgr[74679]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 09 12:04:45 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'prometheus'
Dec 09 12:04:45 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Dec 09 12:04:45 compute-0 systemd[75771]: Activating special unit Exit the Session...
Dec 09 12:04:45 compute-0 systemd[75771]: Stopped target Main User Target.
Dec 09 12:04:45 compute-0 systemd[75771]: Stopped target Basic System.
Dec 09 12:04:45 compute-0 systemd[75771]: Stopped target Paths.
Dec 09 12:04:45 compute-0 systemd[75771]: Stopped target Sockets.
Dec 09 12:04:45 compute-0 systemd[75771]: Stopped target Timers.
Dec 09 12:04:45 compute-0 systemd[75771]: Stopped Mark boot as successful after the user session has run 2 minutes.
Dec 09 12:04:45 compute-0 systemd[75771]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 09 12:04:45 compute-0 systemd[75771]: Closed D-Bus User Message Bus Socket.
Dec 09 12:04:45 compute-0 systemd[75771]: Stopped Create User's Volatile Files and Directories.
Dec 09 12:04:45 compute-0 systemd[75771]: Removed slice User Application Slice.
Dec 09 12:04:45 compute-0 systemd[75771]: Reached target Shutdown.
Dec 09 12:04:45 compute-0 systemd[75771]: Finished Exit the Session.
Dec 09 12:04:45 compute-0 systemd[75771]: Reached target Exit the Session.
Dec 09 12:04:45 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Dec 09 12:04:45 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Dec 09 12:04:45 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Dec 09 12:04:45 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Dec 09 12:04:45 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Dec 09 12:04:45 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Dec 09 12:04:45 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Dec 09 12:04:45 compute-0 systemd[1]: user-42477.slice: Consumed 37.723s CPU time.
Dec 09 12:04:45 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 6.16 scrub starts
Dec 09 12:04:45 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 6.16 scrub ok
Dec 09 12:04:45 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:04:45.927+0000 7f7ba6f91140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 09 12:04:45 compute-0 ceph-mgr[74679]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 09 12:04:45 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'rbd_support'
Dec 09 12:04:46 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:04:46.036+0000 7f7ba6f91140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 09 12:04:46 compute-0 ceph-mgr[74679]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 09 12:04:46 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'restful'
Dec 09 12:04:46 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'rgw'
Dec 09 12:04:46 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:04:46.524+0000 7f7ba6f91140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 09 12:04:46 compute-0 ceph-mgr[74679]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 09 12:04:46 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'rook'
Dec 09 12:04:46 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 12:04:46 compute-0 ceph-mon[74388]: 4.14 scrub starts
Dec 09 12:04:46 compute-0 ceph-mon[74388]: 4.14 scrub ok
Dec 09 12:04:46 compute-0 ceph-mon[74388]: 6.14 scrub starts
Dec 09 12:04:46 compute-0 ceph-mon[74388]: 6.2 scrub starts
Dec 09 12:04:46 compute-0 ceph-mon[74388]: 6.14 scrub ok
Dec 09 12:04:46 compute-0 ceph-mon[74388]: 6.2 scrub ok
Dec 09 12:04:46 compute-0 ceph-mon[74388]: 2.13 scrub starts
Dec 09 12:04:46 compute-0 ceph-mon[74388]: 2.13 scrub ok
Dec 09 12:04:46 compute-0 ceph-mon[74388]: 6.16 scrub starts
Dec 09 12:04:46 compute-0 ceph-mon[74388]: 6.16 scrub ok
Dec 09 12:04:46 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 6.11 scrub starts
Dec 09 12:04:46 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 6.11 scrub ok
Dec 09 12:04:46 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.lorvly restarted
Dec 09 12:04:46 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.lorvly started
Dec 09 12:04:47 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.hvlbot restarted
Dec 09 12:04:47 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.hvlbot started
Dec 09 12:04:47 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:04:47.121+0000 7f7ba6f91140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 09 12:04:47 compute-0 ceph-mgr[74679]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 09 12:04:47 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'selftest'
Dec 09 12:04:47 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:04:47.198+0000 7f7ba6f91140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 09 12:04:47 compute-0 ceph-mgr[74679]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 09 12:04:47 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'snap_schedule'
Dec 09 12:04:47 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:04:47.281+0000 7f7ba6f91140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 09 12:04:47 compute-0 ceph-mgr[74679]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 09 12:04:47 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'stats'
Dec 09 12:04:47 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'status'
Dec 09 12:04:47 compute-0 radosgw[89472]: ====== starting new request req=0x7fb91647e5d0 =====
Dec 09 12:04:47 compute-0 radosgw[89472]: ====== req done req=0x7fb91647e5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 09 12:04:47 compute-0 radosgw[89472]: beast: 0x7fb91647e5d0: 192.168.122.100 - anonymous [09/Dec/2025:12:04:47.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 09 12:04:47 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:04:47.441+0000 7f7ba6f91140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec 09 12:04:47 compute-0 ceph-mgr[74679]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec 09 12:04:47 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'telegraf'
Dec 09 12:04:47 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:04:47.517+0000 7f7ba6f91140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 09 12:04:47 compute-0 ceph-mgr[74679]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 09 12:04:47 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'telemetry'
Dec 09 12:04:47 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:04:47.682+0000 7f7ba6f91140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 09 12:04:47 compute-0 ceph-mgr[74679]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 09 12:04:47 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'test_orchestrator'
Dec 09 12:04:47 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 6.10 scrub starts
Dec 09 12:04:47 compute-0 ceph-mon[74388]: 6.3 scrub starts
Dec 09 12:04:47 compute-0 ceph-mon[74388]: 6.3 scrub ok
Dec 09 12:04:47 compute-0 ceph-mon[74388]: 2.15 deep-scrub starts
Dec 09 12:04:47 compute-0 ceph-mon[74388]: 2.15 deep-scrub ok
Dec 09 12:04:47 compute-0 ceph-mon[74388]: 6.a scrub starts
Dec 09 12:04:47 compute-0 ceph-mon[74388]: 6.a scrub ok
Dec 09 12:04:47 compute-0 ceph-mon[74388]: 6.11 scrub starts
Dec 09 12:04:47 compute-0 ceph-mon[74388]: Standby manager daemon compute-1.lorvly restarted
Dec 09 12:04:47 compute-0 ceph-mon[74388]: Standby manager daemon compute-1.lorvly started
Dec 09 12:04:47 compute-0 ceph-mon[74388]: Standby manager daemon compute-2.hvlbot restarted
Dec 09 12:04:47 compute-0 ceph-mon[74388]: Standby manager daemon compute-2.hvlbot started
Dec 09 12:04:47 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 6.10 scrub ok
Dec 09 12:04:47 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:04:47.925+0000 7f7ba6f91140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 09 12:04:47 compute-0 ceph-mgr[74679]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 09 12:04:47 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'volumes'
Dec 09 12:04:48 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : mgrmap e21: compute-0.wfxreg(active, starting, since 6s), standbys: compute-1.lorvly, compute-2.hvlbot
Dec 09 12:04:48 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:04:48.240+0000 7f7ba6f91140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: mgr[py] Loading python module 'zabbix'
Dec 09 12:04:48 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mgr-compute-0-wfxreg[74675]: 2025-12-09T12:04:48.313+0000 7f7ba6f91140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 09 12:04:48 compute-0 ceph-mon[74388]: log_channel(cluster) log [INF] : Active manager daemon compute-0.wfxreg restarted
Dec 09 12:04:48 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Dec 09 12:04:48 compute-0 ceph-mon[74388]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.wfxreg
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: ms_deliver_dispatch: unhandled message 0x56062b52d860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Dec 09 12:04:48 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Dec 09 12:04:48 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Dec 09 12:04:48 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : mgrmap e22: compute-0.wfxreg(active, starting, since 0.19774s), standbys: compute-1.lorvly, compute-2.hvlbot
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: mgr handle_mgr_map Activating!
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: mgr handle_mgr_map I am now activating
Dec 09 12:04:48 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec 09 12:04:48 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 09 12:04:48 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 09 12:04:48 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 09 12:04:48 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec 09 12:04:48 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 09 12:04:48 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.wfxreg", "id": "compute-0.wfxreg"} v 0)
Dec 09 12:04:48 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mgr metadata", "who": "compute-0.wfxreg", "id": "compute-0.wfxreg"}]: dispatch
Dec 09 12:04:48 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.lorvly", "id": "compute-1.lorvly"} v 0)
Dec 09 12:04:48 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mgr metadata", "who": "compute-1.lorvly", "id": "compute-1.lorvly"}]: dispatch
Dec 09 12:04:48 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.hvlbot", "id": "compute-2.hvlbot"} v 0)
Dec 09 12:04:48 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mgr metadata", "who": "compute-2.hvlbot", "id": "compute-2.hvlbot"}]: dispatch
Dec 09 12:04:48 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 09 12:04:48 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 09 12:04:48 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 09 12:04:48 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 09 12:04:48 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 09 12:04:48 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 09 12:04:48 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Dec 09 12:04:48 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec 09 12:04:48 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).mds e1 all = 1
Dec 09 12:04:48 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec 09 12:04:48 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 09 12:04:48 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Dec 09 12:04:48 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: mgr load Constructed class from module: balancer
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [balancer INFO root] Starting
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [balancer INFO root] Optimize plan auto_2025-12-09_12:04:48
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Dec 09 12:04:48 compute-0 ceph-mon[74388]: log_channel(cluster) log [INF] : Manager daemon compute-0.wfxreg is now available
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: mgr load Constructed class from module: cephadm
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: mgr load Constructed class from module: crash
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: mgr load Constructed class from module: dashboard
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: mgr load Constructed class from module: devicehealth
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO access_control] Loading user roles DB version=2
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO sso] Loading SSO DB version=1
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO root] Configured CherryPy, starting engine...
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: mgr load Constructed class from module: iostat
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [devicehealth INFO root] Starting
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: mgr load Constructed class from module: nfs
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: mgr load Constructed class from module: orchestrator
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: mgr load Constructed class from module: pg_autoscaler
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: mgr load Constructed class from module: progress
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [pg_autoscaler INFO root] _maybe_adjust
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [progress INFO root] Loading...
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7f7b2ab4fd00>, <progress.module.GhostEvent object at 0x7f7b2ab4ff40>, <progress.module.GhostEvent object at 0x7f7b2ab4ff70>, <progress.module.GhostEvent object at 0x7f7b2ab4ffa0>, <progress.module.GhostEvent object at 0x7f7b2ab4ffd0>, <progress.module.GhostEvent object at 0x7f7b25b07040>, <progress.module.GhostEvent object at 0x7f7b25b07070>, <progress.module.GhostEvent object at 0x7f7b25b070a0>, <progress.module.GhostEvent object at 0x7f7b25b070d0>, <progress.module.GhostEvent object at 0x7f7b25b07100>, <progress.module.GhostEvent object at 0x7f7b25b07130>, <progress.module.GhostEvent object at 0x7f7b25b07160>] historic events
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [progress INFO root] Loaded OSDMap, ready.
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [rbd_support INFO root] recovery thread starting
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [rbd_support INFO root] starting setup
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: mgr load Constructed class from module: rbd_support
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: mgr load Constructed class from module: restful
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [restful INFO root] server_addr: :: server_port: 8003
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: mgr load Constructed class from module: status
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: mgr load Constructed class from module: telemetry
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [restful WARNING root] server not running: no certificate configured
Dec 09 12:04:48 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wfxreg/mirror_snapshot_schedule"} v 0)
Dec 09 12:04:48 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wfxreg/mirror_snapshot_schedule"}]: dispatch
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: mgr load Constructed class from module: volumes
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [rbd_support INFO root] PerfHandler: starting
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [rbd_support INFO root] load_task_task: vms, start_after=
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [rbd_support INFO root] load_task_task: volumes, start_after=
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [rbd_support INFO root] load_task_task: backups, start_after=
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [rbd_support INFO root] load_task_task: images, start_after=
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [rbd_support INFO root] TaskHandler: starting
Dec 09 12:04:48 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wfxreg/trash_purge_schedule"} v 0)
Dec 09 12:04:48 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wfxreg/trash_purge_schedule"}]: dispatch
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [rbd_support INFO root] setup complete
Dec 09 12:04:48 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 6.13 scrub starts
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Dec 09 12:04:48 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 6.13 scrub ok
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Dec 09 12:04:48 compute-0 ceph-mon[74388]: 6.11 scrub ok
Dec 09 12:04:48 compute-0 ceph-mon[74388]: 5.13 scrub starts
Dec 09 12:04:48 compute-0 ceph-mon[74388]: 5.13 scrub ok
Dec 09 12:04:48 compute-0 ceph-mon[74388]: 6.5 deep-scrub starts
Dec 09 12:04:48 compute-0 ceph-mon[74388]: 6.5 deep-scrub ok
Dec 09 12:04:48 compute-0 ceph-mon[74388]: 6.10 scrub starts
Dec 09 12:04:48 compute-0 ceph-mon[74388]: 6.10 scrub ok
Dec 09 12:04:48 compute-0 ceph-mon[74388]: mgrmap e21: compute-0.wfxreg(active, starting, since 6s), standbys: compute-1.lorvly, compute-2.hvlbot
Dec 09 12:04:48 compute-0 ceph-mon[74388]: Active manager daemon compute-0.wfxreg restarted
Dec 09 12:04:48 compute-0 ceph-mon[74388]: Activating manager daemon compute-0.wfxreg
Dec 09 12:04:48 compute-0 ceph-mon[74388]: osdmap e44: 3 total, 3 up, 3 in
Dec 09 12:04:48 compute-0 ceph-mon[74388]: mgrmap e22: compute-0.wfxreg(active, starting, since 0.19774s), standbys: compute-1.lorvly, compute-2.hvlbot
Dec 09 12:04:48 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 09 12:04:48 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 09 12:04:48 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 09 12:04:48 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mgr metadata", "who": "compute-0.wfxreg", "id": "compute-0.wfxreg"}]: dispatch
Dec 09 12:04:48 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mgr metadata", "who": "compute-1.lorvly", "id": "compute-1.lorvly"}]: dispatch
Dec 09 12:04:48 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mgr metadata", "who": "compute-2.hvlbot", "id": "compute-2.hvlbot"}]: dispatch
Dec 09 12:04:48 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 09 12:04:48 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 09 12:04:48 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 09 12:04:48 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec 09 12:04:48 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 09 12:04:48 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec 09 12:04:48 compute-0 ceph-mon[74388]: Manager daemon compute-0.wfxreg is now available
Dec 09 12:04:48 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wfxreg/mirror_snapshot_schedule"}]: dispatch
Dec 09 12:04:48 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wfxreg/trash_purge_schedule"}]: dispatch
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Dec 09 12:04:48 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
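
[Note] The "Initializing controller" lines above enumerate the dashboard module's REST routes: the /api/* endpoints form the public REST API, while the /ui-api/* routes serve only the dashboard's own frontend. A minimal sketch of exercising one of the listed controllers (Health -> /api/health) with curl; the port 8443 (the dashboard default) and the admin/secret credentials are assumptions, not values taken from this log:

    # Fetch a token from the Auth controller, then query the Health controller.
    HOST=192.168.122.100                     # assumed dashboard address
    TOKEN=$(curl -sk -X POST "https://$HOST:8443/api/auth" \
      -H 'Accept: application/vnd.ceph.api.v1.0+json' \
      -H 'Content-Type: application/json' \
      -d '{"username": "admin", "password": "secret"}' \
      | python3 -c 'import sys, json; print(json.load(sys.stdin)["token"])')
    curl -sk "https://$HOST:8443/api/health/minimal" \
      -H 'Accept: application/vnd.ceph.api.v1.0+json' \
      -H "Authorization: Bearer $TOKEN"
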
Dec 09 12:04:49 compute-0 sshd-session[93359]: Accepted publickey for ceph-admin from 192.168.122.100 port 44732 ssh2: RSA SHA256:9gI9N7BVF766ydxek6duxvVO5SKV8ll995eSm4AS2/E
Dec 09 12:04:49 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Dec 09 12:04:49 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Dec 09 12:04:49 compute-0 systemd-logind[799]: New session 35 of user ceph-admin.
Dec 09 12:04:49 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Dec 09 12:04:49 compute-0 systemd[1]: Starting User Manager for UID 42477...
Dec 09 12:04:49 compute-0 systemd[93374]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 09 12:04:49 compute-0 ceph-mgr[74679]: [dashboard INFO dashboard.module] Engine started.
Dec 09 12:04:49 compute-0 systemd[93374]: Queued start job for default target Main User Target.
Dec 09 12:04:49 compute-0 systemd[93374]: Created slice User Application Slice.
Dec 09 12:04:49 compute-0 systemd[93374]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 09 12:04:49 compute-0 systemd[93374]: Started Daily Cleanup of User's Temporary Directories.
Dec 09 12:04:49 compute-0 systemd[93374]: Reached target Paths.
Dec 09 12:04:49 compute-0 systemd[93374]: Reached target Timers.
Dec 09 12:04:49 compute-0 systemd[93374]: Starting D-Bus User Message Bus Socket...
Dec 09 12:04:49 compute-0 systemd[93374]: Starting Create User's Volatile Files and Directories...
Dec 09 12:04:49 compute-0 systemd[93374]: Listening on D-Bus User Message Bus Socket.
Dec 09 12:04:49 compute-0 systemd[93374]: Finished Create User's Volatile Files and Directories.
Dec 09 12:04:49 compute-0 systemd[93374]: Reached target Sockets.
Dec 09 12:04:49 compute-0 systemd[93374]: Reached target Basic System.
Dec 09 12:04:49 compute-0 systemd[93374]: Reached target Main User Target.
Dec 09 12:04:49 compute-0 systemd[93374]: Startup finished in 138ms.
Dec 09 12:04:49 compute-0 systemd[1]: Started User Manager for UID 42477.
Dec 09 12:04:49 compute-0 systemd[1]: Started Session 35 of User ceph-admin.
Dec 09 12:04:49 compute-0 sshd-session[93359]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 09 12:04:49 compute-0 radosgw[89472]: ====== starting new request req=0x7fb91647e5d0 =====
Dec 09 12:04:49 compute-0 radosgw[89472]: ====== req done req=0x7fb91647e5d0 op status=0 http_status=200 latency=0.001000036s ======
Dec 09 12:04:49 compute-0 radosgw[89472]: beast: 0x7fb91647e5d0: 192.168.122.100 - anonymous [09/Dec/2025:12:04:49.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000036s
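
[Note] The anonymous "HEAD / HTTP/1.0" request that radosgw answers with 200 is a load-balancer liveness probe; the haproxy instance fronting this RGW (visible below as ceph-...-haproxy-rgw-default-compute-0-rutkbd) sends them continuously. The probe is easy to reproduce by hand; the RGW port 8080 below is an assumption, since the log does not name it:

    # Same unauthenticated probe the balancer sends; a healthy RGW returns 200.
    curl -sI --http1.0 http://192.168.122.100:8080/ | head -1
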
Dec 09 12:04:49 compute-0 sudo[93391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 12:04:49 compute-0 sudo[93391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:49 compute-0 sudo[93391]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:49 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : mgrmap e23: compute-0.wfxreg(active, since 1.18817s), standbys: compute-1.lorvly, compute-2.hvlbot
Dec 09 12:04:49 compute-0 sudo[93416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
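
[Note] Deployment drives cephadm over SSH: the ceph-admin user sudo-runs the copy of the cephadm binary cached under /var/lib/ceph/<fsid>/, here with the subcommand "ls", which inventories all Ceph daemons deployed on this host as JSON. The same inventory can be taken interactively, assuming a cephadm binary on PATH:

    # List every Ceph daemon cephadm manages on this host (JSON output).
    sudo cephadm ls
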
Dec 09 12:04:49 compute-0 sudo[93416]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:49 compute-0 ceph-mgr[74679]: log_channel(audit) log [DBG] : from='client.14460 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 12:04:49 compute-0 ceph-mgr[74679]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Dec 09 12:04:49 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v3: 166 pgs: 166 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec 09 12:04:49 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0)
Dec 09 12:04:49 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Dec 09 12:04:49 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0)
Dec 09 12:04:49 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Dec 09 12:04:49 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0)
Dec 09 12:04:49 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Dec 09 12:04:49 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Dec 09 12:04:49 compute-0 ceph-mon[74388]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec 09 12:04:49 compute-0 ceph-mon[74388]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Dec 09 12:04:49 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mon-compute-0[74384]: 2025-12-09T12:04:49.528+0000 7fcfefaad640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec 09 12:04:49 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
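
[Note] The sequence above is "ceph fs volume create cephfs" expanded by the mgr volumes module: it creates the metadata pool, creates the data pool with the bulk flag, then runs "fs new". The MDS_ALL_DOWN error at this instant is expected, because the mds daemons are only scheduled a moment later. The equivalent manual sequence, using the pool names from the dispatched commands:

    # What the volumes module did behind the scenes:
    ceph osd pool create cephfs.cephfs.meta
    ceph osd pool create cephfs.cephfs.data --bulk
    ceph fs new cephfs cephfs.cephfs.meta cephfs.cephfs.data
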
Dec 09 12:04:49 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).mds e2 new map
Dec 09 12:04:49 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).mds e2 print_map
                                           e2
                                           btime 2025-12-09T12:04:49.529513+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-09T12:04:49.529448+0000
                                           modified        2025-12-09T12:04:49.529449+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                            
                                            
Dec 09 12:04:49 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Dec 09 12:04:49 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Dec 09 12:04:49 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : fsmap cephfs:0
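
[Note] The fsmap dump above shows a just-created filesystem: epoch 2, max_mds 1, no ranks in or up yet, metadata pool 6 and data pool [7]. The same information can be read back at any time:

    # Print one filesystem's map, or the whole fsmap:
    ceph fs get cephfs
    ceph fs dump
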
Dec 09 12:04:49 compute-0 ceph-mgr[74679]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Dec 09 12:04:49 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Dec 09 12:04:49 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec 09 12:04:49 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:49 compute-0 ceph-mgr[74679]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Dec 09 12:04:49 compute-0 systemd[1]: libpod-d27f6a1cf1fc0a0b75aff3e62b5e1573a7d25e7e214858e164d3ee1a352a5fe4.scope: Deactivated successfully.
Dec 09 12:04:49 compute-0 podman[93171]: 2025-12-09 12:04:49.594092045 +0000 UTC m=+11.546802205 container died d27f6a1cf1fc0a0b75aff3e62b5e1573a7d25e7e214858e164d3ee1a352a5fe4 (image=quay.io/ceph/ceph:v19, name=eager_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec 09 12:04:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e3f1a5f81bf65982c4d205784b83189adb47a20a9e8fcb8497643de078def43-merged.mount: Deactivated successfully.
Dec 09 12:04:49 compute-0 podman[93171]: 2025-12-09 12:04:49.642593905 +0000 UTC m=+11.595304055 container remove d27f6a1cf1fc0a0b75aff3e62b5e1573a7d25e7e214858e164d3ee1a352a5fe4 (image=quay.io/ceph/ceph:v19, name=eager_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 12:04:49 compute-0 systemd[1]: libpod-conmon-d27f6a1cf1fc0a0b75aff3e62b5e1573a7d25e7e214858e164d3ee1a352a5fe4.scope: Deactivated successfully.
Dec 09 12:04:49 compute-0 sudo[93168]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:49 compute-0 sudo[93508]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agktdcqirogdgdabfcahneduemwrsaha ; /usr/bin/python3'
Dec 09 12:04:49 compute-0 sudo[93508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:04:49 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 6.1d deep-scrub starts
Dec 09 12:04:49 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 6.1d deep-scrub ok
Dec 09 12:04:49 compute-0 ceph-mon[74388]: 3.8 scrub starts
Dec 09 12:04:49 compute-0 ceph-mon[74388]: 3.8 scrub ok
Dec 09 12:04:49 compute-0 ceph-mon[74388]: 6.d scrub starts
Dec 09 12:04:49 compute-0 ceph-mon[74388]: 6.d scrub ok
Dec 09 12:04:49 compute-0 ceph-mon[74388]: 6.13 scrub starts
Dec 09 12:04:49 compute-0 ceph-mon[74388]: 6.13 scrub ok
Dec 09 12:04:49 compute-0 ceph-mon[74388]: mgrmap e23: compute-0.wfxreg(active, since 1.18817s), standbys: compute-1.lorvly, compute-2.hvlbot
Dec 09 12:04:49 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Dec 09 12:04:49 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Dec 09 12:04:49 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Dec 09 12:04:49 compute-0 ceph-mon[74388]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec 09 12:04:49 compute-0 ceph-mon[74388]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Dec 09 12:04:49 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Dec 09 12:04:49 compute-0 ceph-mon[74388]: osdmap e45: 3 total, 3 up, 3 in
Dec 09 12:04:49 compute-0 ceph-mon[74388]: fsmap cephfs:0
Dec 09 12:04:49 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:49 compute-0 ceph-mon[74388]: 6.1d deep-scrub starts
Dec 09 12:04:49 compute-0 ceph-mon[74388]: 6.1d deep-scrub ok
Dec 09 12:04:50 compute-0 python3[93516]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 750b57e3-924f-51a5-ab09-01517535f732 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
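
[Note] This Ansible task applies an MDS service spec from /tmp/ceph_mds.yml inside a throwaway ceph container. The spec file itself is not shown in the log; given the "Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2" entries, a plausible reconstruction is the standard cephadm mds spec sketched below (hypothetical contents):

    # Hypothetical /tmp/ceph_mds.yml matching the logged placement:
    cat > /tmp/ceph_mds.yml <<'EOF'
    service_type: mds
    service_id: cephfs
    placement:
      hosts:
        - compute-0
        - compute-1
        - compute-2
    EOF
    ceph orch apply -i /tmp/ceph_mds.yml
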
Dec 09 12:04:50 compute-0 podman[93545]: 2025-12-09 12:04:50.070721382 +0000 UTC m=+0.041350008 container create 4986dbca79d2b8df563dc2e0edf2bb17a698611ee830c6ad4908889761fcf9c0 (image=quay.io/ceph/ceph:v19, name=competent_clarke, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 12:04:50 compute-0 podman[93543]: 2025-12-09 12:04:50.093395652 +0000 UTC m=+0.071029715 container exec a4b836a90c212a6dcd631d0879d1d67c676cdc16d15f42acc55a122ac896ef53 (image=quay.io/ceph/ceph:v19, name=ceph-750b57e3-924f-51a5-ab09-01517535f732-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Dec 09 12:04:50 compute-0 systemd[1]: Started libpod-conmon-4986dbca79d2b8df563dc2e0edf2bb17a698611ee830c6ad4908889761fcf9c0.scope.
Dec 09 12:04:50 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:04:50 compute-0 podman[93545]: 2025-12-09 12:04:50.052097667 +0000 UTC m=+0.022726313 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:04:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a0baead90b9347cfc2fd918c1fc1f88baf4366dc76988f824d9df00f27a6b71/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a0baead90b9347cfc2fd918c1fc1f88baf4366dc76988f824d9df00f27a6b71/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a0baead90b9347cfc2fd918c1fc1f88baf4366dc76988f824d9df00f27a6b71/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:50 compute-0 podman[93545]: 2025-12-09 12:04:50.165452611 +0000 UTC m=+0.136081257 container init 4986dbca79d2b8df563dc2e0edf2bb17a698611ee830c6ad4908889761fcf9c0 (image=quay.io/ceph/ceph:v19, name=competent_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec 09 12:04:50 compute-0 podman[93545]: 2025-12-09 12:04:50.173568277 +0000 UTC m=+0.144196913 container start 4986dbca79d2b8df563dc2e0edf2bb17a698611ee830c6ad4908889761fcf9c0 (image=quay.io/ceph/ceph:v19, name=competent_clarke, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Dec 09 12:04:50 compute-0 podman[93545]: 2025-12-09 12:04:50.179777006 +0000 UTC m=+0.150405692 container attach 4986dbca79d2b8df563dc2e0edf2bb17a698611ee830c6ad4908889761fcf9c0 (image=quay.io/ceph/ceph:v19, name=competent_clarke, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec 09 12:04:50 compute-0 podman[93543]: 2025-12-09 12:04:50.220256783 +0000 UTC m=+0.197890826 container exec_died a4b836a90c212a6dcd631d0879d1d67c676cdc16d15f42acc55a122ac896ef53 (image=quay.io/ceph/ceph:v19, name=ceph-750b57e3-924f-51a5-ab09-01517535f732-mon-compute-0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 09 12:04:50 compute-0 ceph-mgr[74679]: [cephadm INFO cherrypy.error] [09/Dec/2025:12:04:50] ENGINE Bus STARTING
Dec 09 12:04:50 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : [09/Dec/2025:12:04:50] ENGINE Bus STARTING
Dec 09 12:04:50 compute-0 ceph-mgr[74679]: [cephadm INFO cherrypy.error] [09/Dec/2025:12:04:50] ENGINE Serving on https://192.168.122.100:7150
Dec 09 12:04:50 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : [09/Dec/2025:12:04:50] ENGINE Serving on https://192.168.122.100:7150
Dec 09 12:04:50 compute-0 ceph-mgr[74679]: [cephadm INFO cherrypy.error] [09/Dec/2025:12:04:50] ENGINE Client ('192.168.122.100', 57390) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 09 12:04:50 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : [09/Dec/2025:12:04:50] ENGINE Client ('192.168.122.100', 57390) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
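
[Note] CherryPy logs "TLS/SSL connection has been closed (EOF)" during the handshake when a client opens a TCP connection to the HTTPS listener and closes it without ever starting TLS; plain TCP port checks do exactly this, and the entry is harmless. One plausible reproduction against the endpoint logged above (192.168.122.100:7150 comes from the log; the probe itself is an assumption):

    # Open a bare TCP connection to the TLS port and close it immediately;
    # the server side records an EOF mid-handshake.
    timeout 1 bash -c ': </dev/tcp/192.168.122.100/7150'
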
Dec 09 12:04:50 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v5: 166 pgs: 166 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec 09 12:04:50 compute-0 ceph-mgr[74679]: log_channel(audit) log [DBG] : from='client.14490 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 12:04:50 compute-0 ceph-mgr[74679]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Dec 09 12:04:50 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Dec 09 12:04:50 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec 09 12:04:50 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:50 compute-0 competent_clarke[93576]: Scheduled mds.cephfs update...
Dec 09 12:04:50 compute-0 systemd[1]: libpod-4986dbca79d2b8df563dc2e0edf2bb17a698611ee830c6ad4908889761fcf9c0.scope: Deactivated successfully.
Dec 09 12:04:50 compute-0 podman[93545]: 2025-12-09 12:04:50.583438071 +0000 UTC m=+0.554066697 container died 4986dbca79d2b8df563dc2e0edf2bb17a698611ee830c6ad4908889761fcf9c0 (image=quay.io/ceph/ceph:v19, name=competent_clarke, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 12:04:50 compute-0 ceph-mgr[74679]: [cephadm INFO cherrypy.error] [09/Dec/2025:12:04:50] ENGINE Serving on http://192.168.122.100:8765
Dec 09 12:04:50 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : [09/Dec/2025:12:04:50] ENGINE Serving on http://192.168.122.100:8765
Dec 09 12:04:50 compute-0 ceph-mgr[74679]: [cephadm INFO cherrypy.error] [09/Dec/2025:12:04:50] ENGINE Bus STARTED
Dec 09 12:04:50 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : [09/Dec/2025:12:04:50] ENGINE Bus STARTED
Dec 09 12:04:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a0baead90b9347cfc2fd918c1fc1f88baf4366dc76988f824d9df00f27a6b71-merged.mount: Deactivated successfully.
Dec 09 12:04:50 compute-0 podman[93545]: 2025-12-09 12:04:50.628438847 +0000 UTC m=+0.599067473 container remove 4986dbca79d2b8df563dc2e0edf2bb17a698611ee830c6ad4908889761fcf9c0 (image=quay.io/ceph/ceph:v19, name=competent_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS)
Dec 09 12:04:50 compute-0 systemd[1]: libpod-conmon-4986dbca79d2b8df563dc2e0edf2bb17a698611ee830c6ad4908889761fcf9c0.scope: Deactivated successfully.
Dec 09 12:04:50 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 09 12:04:50 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:50 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 09 12:04:50 compute-0 sudo[93508]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:50 compute-0 ceph-mgr[74679]: [devicehealth INFO root] Check health
Dec 09 12:04:50 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:50 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 09 12:04:50 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:50 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 09 12:04:50 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:50 compute-0 podman[93760]: 2025-12-09 12:04:50.762185751 +0000 UTC m=+0.054036556 container exec d845d38373399b27c5f961cd5a983c0c22677b6f0a8c8a9ec8bc84c5563a3da9 (image=quay.io/ceph/haproxy:2.3, name=ceph-750b57e3-924f-51a5-ab09-01517535f732-haproxy-rgw-default-compute-0-rutkbd)
Dec 09 12:04:50 compute-0 podman[93760]: 2025-12-09 12:04:50.793577517 +0000 UTC m=+0.085428292 container exec_died d845d38373399b27c5f961cd5a983c0c22677b6f0a8c8a9ec8bc84c5563a3da9 (image=quay.io/ceph/haproxy:2.3, name=ceph-750b57e3-924f-51a5-ab09-01517535f732-haproxy-rgw-default-compute-0-rutkbd)
Dec 09 12:04:50 compute-0 sudo[93818]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jokvojbfrkqeffvtuzzodmlubivbstgo ; /usr/bin/python3'
Dec 09 12:04:50 compute-0 sudo[93818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:04:50 compute-0 ceph-mon[74388]: 6.12 scrub starts
Dec 09 12:04:50 compute-0 ceph-mon[74388]: 6.12 scrub ok
Dec 09 12:04:50 compute-0 ceph-mon[74388]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Dec 09 12:04:50 compute-0 ceph-mon[74388]: 6.15 scrub starts
Dec 09 12:04:50 compute-0 ceph-mon[74388]: 6.15 scrub ok
Dec 09 12:04:50 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:50 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:50 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:50 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:50 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:50 compute-0 python3[93824]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 750b57e3-924f-51a5-ab09-01517535f732 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   nfs cluster create cephfs --ingress --virtual-ip=192.168.122.2/24 --ingress-mode=haproxy-protocol '--placement=compute-0 compute-1 compute-2 ' _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
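
[Note] This creates the NFS cluster together with an ingress service (haproxy plus keepalived) holding virtual IP 192.168.122.2; haproxy-protocol mode makes haproxy forward the real client address to ganesha via the PROXY protocol. Once the run completes, the result can be inspected; "cephfs" is the cluster id taken from the command above:

    # Inspect the NFS cluster and the services the ingress request created:
    ceph nfs cluster info cephfs
    ceph orch ls nfs
    ceph orch ls ingress
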
Dec 09 12:04:50 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : mgrmap e24: compute-0.wfxreg(active, since 2s), standbys: compute-1.lorvly, compute-2.hvlbot
Dec 09 12:04:51 compute-0 podman[93852]: 2025-12-09 12:04:51.017953614 +0000 UTC m=+0.066392841 container exec 8a80e9c07f8151d01c8e0945e5cbbf405a1c7fd22d214e114f9c95218a689c8b (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-750b57e3-924f-51a5-ab09-01517535f732-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 09 12:04:51 compute-0 podman[93852]: 2025-12-09 12:04:51.025160068 +0000 UTC m=+0.073599275 container exec_died 8a80e9c07f8151d01c8e0945e5cbbf405a1c7fd22d214e114f9c95218a689c8b (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-750b57e3-924f-51a5-ab09-01517535f732-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 09 12:04:51 compute-0 podman[93865]: 2025-12-09 12:04:51.055945573 +0000 UTC m=+0.053901831 container create 76d3bdb80b712eee6deff34b51b9d331206d933dc7efc9125000516c7679d9ec (image=quay.io/ceph/ceph:v19, name=nice_chebyshev, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 12:04:51 compute-0 systemd[1]: Started libpod-conmon-76d3bdb80b712eee6deff34b51b9d331206d933dc7efc9125000516c7679d9ec.scope.
Dec 09 12:04:51 compute-0 sudo[93416]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:51 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:04:51 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 12:04:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9657ee64a50691d912cfef24944f7704dee529780754ab0175adac8ada7488f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9657ee64a50691d912cfef24944f7704dee529780754ab0175adac8ada7488f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9657ee64a50691d912cfef24944f7704dee529780754ab0175adac8ada7488f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:51 compute-0 podman[93865]: 2025-12-09 12:04:51.124123826 +0000 UTC m=+0.122080104 container init 76d3bdb80b712eee6deff34b51b9d331206d933dc7efc9125000516c7679d9ec (image=quay.io/ceph/ceph:v19, name=nice_chebyshev, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 12:04:51 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:51 compute-0 podman[93865]: 2025-12-09 12:04:51.029436489 +0000 UTC m=+0.027392767 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:04:51 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 12:04:51 compute-0 podman[93865]: 2025-12-09 12:04:51.131879719 +0000 UTC m=+0.129835977 container start 76d3bdb80b712eee6deff34b51b9d331206d933dc7efc9125000516c7679d9ec (image=quay.io/ceph/ceph:v19, name=nice_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325)
Dec 09 12:04:51 compute-0 podman[93865]: 2025-12-09 12:04:51.136381967 +0000 UTC m=+0.134338215 container attach 76d3bdb80b712eee6deff34b51b9d331206d933dc7efc9125000516c7679d9ec (image=quay.io/ceph/ceph:v19, name=nice_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 12:04:51 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:51 compute-0 sudo[93905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 12:04:51 compute-0 sudo[93905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:51 compute-0 sudo[93905]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:51 compute-0 sudo[93930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 09 12:04:51 compute-0 sudo[93930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:51 compute-0 radosgw[89472]: ====== starting new request req=0x7fb91647e5d0 =====
Dec 09 12:04:51 compute-0 radosgw[89472]: ====== req done req=0x7fb91647e5d0 op status=0 http_status=200 latency=0.001000035s ======
Dec 09 12:04:51 compute-0 radosgw[89472]: beast: 0x7fb91647e5d0: 192.168.122.100 - anonymous [09/Dec/2025:12:04:51.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Dec 09 12:04:51 compute-0 ceph-mgr[74679]: log_channel(audit) log [DBG] : from='client.14502 -' entity='client.admin' cmd=[{"prefix": "nfs cluster create", "cluster_id": "cephfs", "ingress": true, "virtual_ip": "192.168.122.2/24", "ingress_mode": "haproxy-protocol", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 12:04:51 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true} v 0)
Dec 09 12:04:51 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
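
[Note] Pool names beginning with a dot are reserved for Ceph-internal use, so creating ".nfs" requires explicit confirmation; that is why the mgr passes "yes_i_really_mean_it": true. Issued by hand, the dispatched command corresponds to:

    # Dot-prefixed pools are internal and need the confirmation flag:
    ceph osd pool create .nfs --yes-i-really-mean-it
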
Dec 09 12:04:51 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e45 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 12:04:51 compute-0 sudo[93930]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:51 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 09 12:04:51 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:51 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 09 12:04:51 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:51 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Dec 09 12:04:51 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 09 12:04:51 compute-0 sudo[94008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 12:04:51 compute-0 sudo[94008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:51 compute-0 sudo[94008]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:51 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 09 12:04:51 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:51 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 09 12:04:51 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:51 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Dec 09 12:04:51 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 09 12:04:51 compute-0 sudo[94033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Dec 09 12:04:51 compute-0 sudo[94033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
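
[Note] gather-facts and list-networks feed the orchestrator's host inventory; the results land in the config-key store as the mgr/cephadm/host.* keys being set around these lines. Either can be run directly on a host, assuming a cephadm binary on PATH:

    # Show the NICs and subnets cephadm reports for this host.
    sudo cephadm list-networks
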
Dec 09 12:04:51 compute-0 ceph-mon[74388]: 3.15 scrub starts
Dec 09 12:04:51 compute-0 ceph-mon[74388]: 3.15 scrub ok
Dec 09 12:04:51 compute-0 ceph-mon[74388]: [09/Dec/2025:12:04:50] ENGINE Bus STARTING
Dec 09 12:04:51 compute-0 ceph-mon[74388]: [09/Dec/2025:12:04:50] ENGINE Serving on https://192.168.122.100:7150
Dec 09 12:04:51 compute-0 ceph-mon[74388]: [09/Dec/2025:12:04:50] ENGINE Client ('192.168.122.100', 57390) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 09 12:04:51 compute-0 ceph-mon[74388]: pgmap v5: 166 pgs: 166 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec 09 12:04:51 compute-0 ceph-mon[74388]: from='client.14490 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 09 12:04:51 compute-0 ceph-mon[74388]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Dec 09 12:04:51 compute-0 ceph-mon[74388]: [09/Dec/2025:12:04:50] ENGINE Serving on http://192.168.122.100:8765
Dec 09 12:04:51 compute-0 ceph-mon[74388]: [09/Dec/2025:12:04:50] ENGINE Bus STARTED
Dec 09 12:04:51 compute-0 ceph-mon[74388]: 6.7 deep-scrub starts
Dec 09 12:04:51 compute-0 ceph-mon[74388]: 6.7 deep-scrub ok
Dec 09 12:04:51 compute-0 ceph-mon[74388]: mgrmap e24: compute-0.wfxreg(active, since 2s), standbys: compute-1.lorvly, compute-2.hvlbot
Dec 09 12:04:51 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:51 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:51 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
Dec 09 12:04:51 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:51 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:51 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 09 12:04:51 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:51 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:51 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 09 12:04:52 compute-0 sudo[94033]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:52 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 12:04:52 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:52 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Dec 09 12:04:52 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 12:04:52 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Dec 09 12:04:52 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Dec 09 12:04:52 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Dec 09 12:04:52 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"} v 0)
Dec 09 12:04:52 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
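
[Note] Enabling the "nfs" application on the new pool records what the pool is used for and keeps the POOL_APP_NOT_ENABLED health warning from firing on it. The direct form of the dispatched command:

    # Tag the .nfs pool with the nfs application:
    ceph osd pool application enable .nfs nfs
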
Dec 09 12:04:52 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:52 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec 09 12:04:52 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 09 12:04:52 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 12:04:52 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
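Before pushing /etc/ceph/ceph.conf out to each host, cephadm asks the mon for a freshly generated minimal client config (fsid plus mon addresses). The same output can be reproduced by hand:

    # prints a minimal ceph.conf (fsid and mon_host entries) to stdout
    ceph config generate-minimal-conf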
Dec 09 12:04:52 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 09 12:04:52 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 09 12:04:52 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Dec 09 12:04:52 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Dec 09 12:04:52 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Dec 09 12:04:52 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Dec 09 12:04:52 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Dec 09 12:04:52 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Dec 09 12:04:52 compute-0 sudo[94074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 09 12:04:52 compute-0 sudo[94074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:52 compute-0 sudo[94074]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:52 compute-0 sudo[94099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/etc/ceph
Dec 09 12:04:52 compute-0 sudo[94099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:52 compute-0 sudo[94099]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:52 compute-0 sudo[94124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/etc/ceph/ceph.conf.new
Dec 09 12:04:52 compute-0 sudo[94124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:52 compute-0 sudo[94124]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:52 compute-0 sudo[94149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732
Dec 09 12:04:52 compute-0 sudo[94149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:52 compute-0 sudo[94149]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:52 compute-0 sudo[94174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/etc/ceph/ceph.conf.new
Dec 09 12:04:52 compute-0 sudo[94174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:52 compute-0 sudo[94174]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:52 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v7: 167 pgs: 1 unknown, 166 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec 09 12:04:52 compute-0 sudo[94222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/etc/ceph/ceph.conf.new
Dec 09 12:04:52 compute-0 sudo[94222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:52 compute-0 sudo[94222]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:52 compute-0 sudo[94247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/etc/ceph/ceph.conf.new
Dec 09 12:04:52 compute-0 sudo[94247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:52 compute-0 sudo[94247]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:52 compute-0 sudo[94272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 09 12:04:52 compute-0 sudo[94272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:52 compute-0 sudo[94272]: pam_unix(sudo:session): session closed for user root
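The sudo sequence above is cephadm's staged-write pattern for managed files: create the destination directory, build the new file under a per-cluster staging tree in /tmp, fix ownership and mode while the file is still staged, then mv it over the live path so readers never observe a partially written config. Condensed into a sketch (fsid and paths copied from this log; the generated config payload is written into the .new file between the touch and chown steps):

    fsid=750b57e3-924f-51a5-ab09-01517535f732
    stage=/tmp/cephadm-$fsid/etc/ceph
    sudo mkdir -p /etc/ceph "$stage"
    sudo touch "$stage/ceph.conf.new"
    # ...minimal config written into ceph.conf.new here...
    sudo chown -R 0:0 "$stage/ceph.conf.new"
    sudo chmod 644 "$stage/ceph.conf.new"
    sudo mv "$stage/ceph.conf.new" /etc/ceph/ceph.conf

Note that mv is an atomic rename only when source and destination share a filesystem; when /tmp is a separate mount it degrades to copy-plus-unlink, and fixing ownership and permissions before the move keeps the file from ever being visible with the wrong mode either way.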
Dec 09 12:04:52 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.conf
Dec 09 12:04:52 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.conf
Dec 09 12:04:52 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.conf
Dec 09 12:04:52 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.conf
Dec 09 12:04:52 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.conf
Dec 09 12:04:52 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.conf
Dec 09 12:04:52 compute-0 sudo[94297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config
Dec 09 12:04:52 compute-0 sudo[94297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:52 compute-0 sudo[94297]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:52 compute-0 sudo[94322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config
Dec 09 12:04:52 compute-0 sudo[94322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:52 compute-0 sudo[94322]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:52 compute-0 sudo[94347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.conf.new
Dec 09 12:04:52 compute-0 sudo[94347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:52 compute-0 sudo[94347]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:52 compute-0 ceph-mon[74388]: 6.1c scrub starts
Dec 09 12:04:52 compute-0 ceph-mon[74388]: 6.1c scrub ok
Dec 09 12:04:52 compute-0 ceph-mon[74388]: from='client.14502 -' entity='client.admin' cmd=[{"prefix": "nfs cluster create", "cluster_id": "cephfs", "ingress": true, "virtual_ip": "192.168.122.2/24", "ingress_mode": "haproxy-protocol", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
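The dispatch above is the mon's view of the mgr nfs module command that drives everything that follows. Reconstructed as the client-side CLI (arguments copied from the log line):

    ceph nfs cluster create cephfs \
        "compute-0 compute-1 compute-2" \
        --ingress \
        --virtual-ip 192.168.122.2/24 \
        --ingress-mode haproxy-protocol

With --ingress, the module fronts the NFS-Ganesha daemons with an haproxy/keepalived ingress service on the given virtual IP; haproxy-protocol mode speaks the PROXY protocol to Ganesha so client source addresses are preserved.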
Dec 09 12:04:52 compute-0 ceph-mon[74388]: 6.8 scrub starts
Dec 09 12:04:52 compute-0 ceph-mon[74388]: 6.8 scrub ok
Dec 09 12:04:52 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:52 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Dec 09 12:04:52 compute-0 ceph-mon[74388]: osdmap e46: 3 total, 3 up, 3 in
Dec 09 12:04:52 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
Dec 09 12:04:52 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:52 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 09 12:04:52 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:04:52 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 09 12:04:52 compute-0 ceph-mon[74388]: Updating compute-0:/etc/ceph/ceph.conf
Dec 09 12:04:52 compute-0 ceph-mon[74388]: Updating compute-1:/etc/ceph/ceph.conf
Dec 09 12:04:52 compute-0 ceph-mon[74388]: Updating compute-2:/etc/ceph/ceph.conf
Dec 09 12:04:52 compute-0 sudo[94372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732
Dec 09 12:04:53 compute-0 sudo[94372]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:53 compute-0 sudo[94372]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:53 compute-0 sudo[94397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.conf.new
Dec 09 12:04:53 compute-0 sudo[94397]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:53 compute-0 sudo[94397]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:53 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Dec 09 12:04:53 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
Dec 09 12:04:53 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Dec 09 12:04:53 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Dec 09 12:04:53 compute-0 ceph-mon[74388]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
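This warning is transient here: osdmap e47 lands after the .nfs pool exists but before its application tag is recorded, so the check fires and is cleared moments later (see the "Health check cleared: POOL_APP_NOT_ENABLED" entry at 12:04:55). The manual equivalent of what the mgr dispatched, for any pool tripping this check:

    ceph osd pool application enable .nfs nfs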
Dec 09 12:04:53 compute-0 sudo[94445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.conf.new
Dec 09 12:04:53 compute-0 sudo[94445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:53 compute-0 sudo[94445]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:53 compute-0 ceph-mgr[74679]: [nfs INFO nfs.cluster] Created empty object:conf-nfs.cephfs
Dec 09 12:04:53 compute-0 ceph-mgr[74679]: [cephadm INFO root] Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Dec 09 12:04:53 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Dec 09 12:04:53 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : mgrmap e25: compute-0.wfxreg(active, since 4s), standbys: compute-1.lorvly, compute-2.hvlbot
Dec 09 12:04:53 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 09 12:04:53 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:53 compute-0 ceph-mgr[74679]: [cephadm INFO root] Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Dec 09 12:04:53 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
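Both specs are persisted in the mon config-key store under mgr/cephadm/spec.* (the two config-key set commands that follow). Once saved, they can be inspected through the orchestrator, e.g.:

    ceph orch ls --service-name nfs.cephfs --export
    ceph orch ls --service-name ingress.nfs.cephfs --export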
Dec 09 12:04:53 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec 09 12:04:53 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:53 compute-0 sudo[94480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.conf.new
Dec 09 12:04:53 compute-0 sudo[94480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:53 compute-0 sudo[94480]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:53 compute-0 systemd[1]: libpod-76d3bdb80b712eee6deff34b51b9d331206d933dc7efc9125000516c7679d9ec.scope: Deactivated successfully.
Dec 09 12:04:53 compute-0 podman[93865]: 2025-12-09 12:04:53.262237256 +0000 UTC m=+2.260193514 container died 76d3bdb80b712eee6deff34b51b9d331206d933dc7efc9125000516c7679d9ec (image=quay.io/ceph/ceph:v19, name=nice_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 12:04:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9657ee64a50691d912cfef24944f7704dee529780754ab0175adac8ada7488f-merged.mount: Deactivated successfully.
Dec 09 12:04:53 compute-0 sudo[94506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.conf.new /var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.conf
Dec 09 12:04:53 compute-0 sudo[94506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:53 compute-0 sudo[94506]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:53 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 09 12:04:53 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 09 12:04:53 compute-0 podman[93865]: 2025-12-09 12:04:53.329439144 +0000 UTC m=+2.327395392 container remove 76d3bdb80b712eee6deff34b51b9d331206d933dc7efc9125000516c7679d9ec (image=quay.io/ceph/ceph:v19, name=nice_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 12:04:53 compute-0 sudo[93818]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:53 compute-0 systemd[1]: libpod-conmon-76d3bdb80b712eee6deff34b51b9d331206d933dc7efc9125000516c7679d9ec.scope: Deactivated successfully.
Dec 09 12:04:53 compute-0 sudo[94542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 09 12:04:53 compute-0 sudo[94542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:53 compute-0 sudo[94542]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:53 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec 09 12:04:53 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec 09 12:04:53 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec 09 12:04:53 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec 09 12:04:53 compute-0 sudo[94567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/etc/ceph
Dec 09 12:04:53 compute-0 radosgw[89472]: ====== starting new request req=0x7fb91647e5d0 =====
Dec 09 12:04:53 compute-0 sudo[94567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:53 compute-0 radosgw[89472]: ====== req done req=0x7fb91647e5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 09 12:04:53 compute-0 radosgw[89472]: beast: 0x7fb91647e5d0: 192.168.122.100 - anonymous [09/Dec/2025:12:04:53.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 09 12:04:53 compute-0 sudo[94567]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:53 compute-0 sudo[94592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/etc/ceph/ceph.client.admin.keyring.new
Dec 09 12:04:53 compute-0 sudo[94592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:53 compute-0 sudo[94592]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:53 compute-0 sudo[94617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732
Dec 09 12:04:53 compute-0 sudo[94617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:53 compute-0 sudo[94617]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:53 compute-0 sudo[94642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/etc/ceph/ceph.client.admin.keyring.new
Dec 09 12:04:53 compute-0 sudo[94642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:53 compute-0 sudo[94642]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:53 compute-0 sudo[94690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/etc/ceph/ceph.client.admin.keyring.new
Dec 09 12:04:53 compute-0 sudo[94690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:53 compute-0 sudo[94690]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:53 compute-0 sudo[94715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/etc/ceph/ceph.client.admin.keyring.new
Dec 09 12:04:53 compute-0 sudo[94715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:53 compute-0 sudo[94715]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:53 compute-0 sudo[94740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Dec 09 12:04:53 compute-0 sudo[94740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:53 compute-0 sudo[94740]: pam_unix(sudo:session): session closed for user root
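The admin keyring is distributed with the same staged-write pattern as ceph.conf, with one difference worth noting: the staged file is re-chmodded from 644 to 600 before the final mv, so the keyring never lands world-readable under /etc/ceph. A quick verification sketch on any managed host:

    sudo stat -c '%a %U:%G %n' /etc/ceph/ceph.client.admin.keyring   # expect 600 root:root
    sudo ceph -s   # confirms the distributed conf and keyring actually authenticate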
Dec 09 12:04:53 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.client.admin.keyring
Dec 09 12:04:53 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.client.admin.keyring
Dec 09 12:04:53 compute-0 sudo[94765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config
Dec 09 12:04:53 compute-0 sudo[94765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:53 compute-0 sudo[94765]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:53 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.client.admin.keyring
Dec 09 12:04:53 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.client.admin.keyring
Dec 09 12:04:53 compute-0 sudo[94790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config
Dec 09 12:04:53 compute-0 sudo[94790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:53 compute-0 sudo[94790]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:53 compute-0 sudo[94839]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dktoumvhucdshzqyojdiylbmkyffcnka ; /usr/bin/python3'
Dec 09 12:04:53 compute-0 sudo[94839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:04:53 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.client.admin.keyring
Dec 09 12:04:53 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.client.admin.keyring
Dec 09 12:04:54 compute-0 sudo[94838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.client.admin.keyring.new
Dec 09 12:04:54 compute-0 sudo[94838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:54 compute-0 sudo[94838]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:54 compute-0 sudo[94866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732
Dec 09 12:04:54 compute-0 sudo[94866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:54 compute-0 sudo[94866]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:54 compute-0 python3[94857]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 750b57e3-924f-51a5-ab09-01517535f732 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid glance _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
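The Ansible task wraps radosgw-admin in a throwaway quay.io/ceph/ceph:v19 container so no RGW tooling is needed on the host itself. The probe it runs, reformatted for readability:

    podman run --rm --net=host --ipc=host \
        --volume /etc/ceph:/etc/ceph:z \
        --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
        --entrypoint radosgw-admin quay.io/ceph/ceph:v19 \
        --fsid 750b57e3-924f-51a5-ab09-01517535f732 \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        user info --uid glance

When the uid does not exist yet, radosgw-admin exits nonzero with "could not fetch user info: no user info saved" (visible at 12:04:54 below), which the playbook takes as the cue to create the user.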
Dec 09 12:04:54 compute-0 sudo[94891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.client.admin.keyring.new
Dec 09 12:04:54 compute-0 sudo[94891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:54 compute-0 sudo[94891]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:54 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Dec 09 12:04:54 compute-0 ceph-mon[74388]: pgmap v7: 167 pgs: 1 unknown, 166 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec 09 12:04:54 compute-0 ceph-mon[74388]: Updating compute-0:/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.conf
Dec 09 12:04:54 compute-0 ceph-mon[74388]: Updating compute-1:/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.conf
Dec 09 12:04:54 compute-0 ceph-mon[74388]: Updating compute-2:/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.conf
Dec 09 12:04:54 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
Dec 09 12:04:54 compute-0 ceph-mon[74388]: osdmap e47: 3 total, 3 up, 3 in
Dec 09 12:04:54 compute-0 ceph-mon[74388]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 09 12:04:54 compute-0 ceph-mon[74388]: Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Dec 09 12:04:54 compute-0 ceph-mon[74388]: mgrmap e25: compute-0.wfxreg(active, since 4s), standbys: compute-1.lorvly, compute-2.hvlbot
Dec 09 12:04:54 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:54 compute-0 ceph-mon[74388]: Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Dec 09 12:04:54 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:54 compute-0 ceph-mon[74388]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 09 12:04:54 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Dec 09 12:04:54 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Dec 09 12:04:54 compute-0 podman[94915]: 2025-12-09 12:04:54.190471358 +0000 UTC m=+0.044662655 container create 742d127ecaf1e9b2c01fca3083b912dde2dfbc068d9915f9e207c73d97f43b47 (image=quay.io/ceph/ceph:v19, name=vigilant_elbakyan, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 12:04:54 compute-0 systemd[1]: Started libpod-conmon-742d127ecaf1e9b2c01fca3083b912dde2dfbc068d9915f9e207c73d97f43b47.scope.
Dec 09 12:04:54 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:04:54 compute-0 sudo[94954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.client.admin.keyring.new
Dec 09 12:04:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e079e98a692dc7595e1069eacfceb2e3d402640d1fea3e85a4643daafce5985/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e079e98a692dc7595e1069eacfceb2e3d402640d1fea3e85a4643daafce5985/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:54 compute-0 sudo[94954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:54 compute-0 sudo[94954]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:54 compute-0 podman[94915]: 2025-12-09 12:04:54.265403619 +0000 UTC m=+0.119594946 container init 742d127ecaf1e9b2c01fca3083b912dde2dfbc068d9915f9e207c73d97f43b47 (image=quay.io/ceph/ceph:v19, name=vigilant_elbakyan, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 12:04:54 compute-0 podman[94915]: 2025-12-09 12:04:54.172148022 +0000 UTC m=+0.026339339 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:04:54 compute-0 podman[94915]: 2025-12-09 12:04:54.272029592 +0000 UTC m=+0.126220889 container start 742d127ecaf1e9b2c01fca3083b912dde2dfbc068d9915f9e207c73d97f43b47 (image=quay.io/ceph/ceph:v19, name=vigilant_elbakyan, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 12:04:54 compute-0 podman[94915]: 2025-12-09 12:04:54.275964001 +0000 UTC m=+0.130155298 container attach 742d127ecaf1e9b2c01fca3083b912dde2dfbc068d9915f9e207c73d97f43b47 (image=quay.io/ceph/ceph:v19, name=vigilant_elbakyan, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 09 12:04:54 compute-0 sudo[94983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.client.admin.keyring.new
Dec 09 12:04:54 compute-0 sudo[94983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:54 compute-0 sudo[94983]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:54 compute-0 sudo[95034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-750b57e3-924f-51a5-ab09-01517535f732/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.client.admin.keyring.new /var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.client.admin.keyring
Dec 09 12:04:54 compute-0 sudo[95034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:54 compute-0 sudo[95034]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:54 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 12:04:54 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:54 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 12:04:54 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:54 compute-0 vigilant_elbakyan[94970]: could not fetch user info: no user info saved
Dec 09 12:04:54 compute-0 systemd[1]: libpod-742d127ecaf1e9b2c01fca3083b912dde2dfbc068d9915f9e207c73d97f43b47.scope: Deactivated successfully.
Dec 09 12:04:54 compute-0 conmon[94970]: conmon 742d127ecaf1e9b2c01f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-742d127ecaf1e9b2c01fca3083b912dde2dfbc068d9915f9e207c73d97f43b47.scope/container/memory.events
Dec 09 12:04:54 compute-0 podman[94915]: 2025-12-09 12:04:54.510873619 +0000 UTC m=+0.365064916 container died 742d127ecaf1e9b2c01fca3083b912dde2dfbc068d9915f9e207c73d97f43b47 (image=quay.io/ceph/ceph:v19, name=vigilant_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec 09 12:04:54 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v10: 167 pgs: 167 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 0 B/s wr, 17 op/s
Dec 09 12:04:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e079e98a692dc7595e1069eacfceb2e3d402640d1fea3e85a4643daafce5985-merged.mount: Deactivated successfully.
Dec 09 12:04:54 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 09 12:04:54 compute-0 podman[94915]: 2025-12-09 12:04:54.546720152 +0000 UTC m=+0.400911449 container remove 742d127ecaf1e9b2c01fca3083b912dde2dfbc068d9915f9e207c73d97f43b47 (image=quay.io/ceph/ceph:v19, name=vigilant_elbakyan, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 09 12:04:54 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:54 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 09 12:04:54 compute-0 systemd[1]: libpod-conmon-742d127ecaf1e9b2c01fca3083b912dde2dfbc068d9915f9e207c73d97f43b47.scope: Deactivated successfully.
Dec 09 12:04:54 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:54 compute-0 sudo[94839]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:54 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 09 12:04:54 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:54 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 09 12:04:54 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:54 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 09 12:04:54 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:54 compute-0 ceph-mgr[74679]: [progress INFO root] update: starting ev 5e0e9545-f2de-48cb-bd1f-d58cb679a0f9 (Updating node-exporter deployment (+2 -> 3))
Dec 09 12:04:54 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-1 on compute-1
Dec 09 12:04:54 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-1 on compute-1
Dec 09 12:04:54 compute-0 sudo[95151]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkdezfhilpusgpkpwzwftorvbakcipnr ; /usr/bin/python3'
Dec 09 12:04:54 compute-0 sudo[95151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:04:54 compute-0 python3[95153]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 750b57e3-924f-51a5-ab09-01517535f732 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="glance" --display-name="Glance S3 User" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
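Same container wrapper as the probe above; only the radosgw-admin action changes:

    radosgw-admin user create --uid=glance --display-name="Glance S3 User"

The JSON document it prints below includes the freshly generated S3 access_key/secret_key pair. The identical block emitted by the next container (friendly_lumiere, 12:04:56) is a follow-up read of the same user, as the matching create_date shows, not a duplicated paste.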
Dec 09 12:04:54 compute-0 podman[95154]: 2025-12-09 12:04:54.923108377 +0000 UTC m=+0.056226113 container create 63bf7d6f1c460604fff065b349511fffef9b795c39af670b3a694d59c40f3d8a (image=quay.io/ceph/ceph:v19, name=quirky_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Dec 09 12:04:54 compute-0 podman[95154]: 2025-12-09 12:04:54.892180077 +0000 UTC m=+0.025297843 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:04:54 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 09 12:04:55 compute-0 systemd[1]: Started libpod-conmon-63bf7d6f1c460604fff065b349511fffef9b795c39af670b3a694d59c40f3d8a.scope.
Dec 09 12:04:55 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:04:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2bcb098e606110b71f06994483c34c386afd6faa58ac6902c9bbb5bbeae7070/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2bcb098e606110b71f06994483c34c386afd6faa58ac6902c9bbb5bbeae7070/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:55 compute-0 podman[95154]: 2025-12-09 12:04:55.170370291 +0000 UTC m=+0.303488037 container init 63bf7d6f1c460604fff065b349511fffef9b795c39af670b3a694d59c40f3d8a (image=quay.io/ceph/ceph:v19, name=quirky_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Dec 09 12:04:55 compute-0 podman[95154]: 2025-12-09 12:04:55.177874275 +0000 UTC m=+0.310992011 container start 63bf7d6f1c460604fff065b349511fffef9b795c39af670b3a694d59c40f3d8a (image=quay.io/ceph/ceph:v19, name=quirky_jennings, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Dec 09 12:04:55 compute-0 podman[95154]: 2025-12-09 12:04:55.182643063 +0000 UTC m=+0.315760819 container attach 63bf7d6f1c460604fff065b349511fffef9b795c39af670b3a694d59c40f3d8a (image=quay.io/ceph/ceph:v19, name=quirky_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 09 12:04:55 compute-0 ceph-mon[74388]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec 09 12:04:55 compute-0 ceph-mon[74388]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec 09 12:04:55 compute-0 ceph-mon[74388]: Updating compute-0:/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.client.admin.keyring
Dec 09 12:04:55 compute-0 ceph-mon[74388]: Updating compute-1:/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.client.admin.keyring
Dec 09 12:04:55 compute-0 ceph-mon[74388]: Updating compute-2:/var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/config/ceph.client.admin.keyring
Dec 09 12:04:55 compute-0 ceph-mon[74388]: osdmap e48: 3 total, 3 up, 3 in
Dec 09 12:04:55 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:55 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:55 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:55 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:55 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:55 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:55 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:55 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : mgrmap e26: compute-0.wfxreg(active, since 6s), standbys: compute-1.lorvly, compute-2.hvlbot
Dec 09 12:04:55 compute-0 quirky_jennings[95170]: {
Dec 09 12:04:55 compute-0 quirky_jennings[95170]:     "user_id": "glance",
Dec 09 12:04:55 compute-0 quirky_jennings[95170]:     "display_name": "Glance S3 User",
Dec 09 12:04:55 compute-0 quirky_jennings[95170]:     "email": "",
Dec 09 12:04:55 compute-0 quirky_jennings[95170]:     "suspended": 0,
Dec 09 12:04:55 compute-0 quirky_jennings[95170]:     "max_buckets": 1000,
Dec 09 12:04:55 compute-0 quirky_jennings[95170]:     "subusers": [],
Dec 09 12:04:55 compute-0 quirky_jennings[95170]:     "keys": [
Dec 09 12:04:55 compute-0 quirky_jennings[95170]:         {
Dec 09 12:04:55 compute-0 quirky_jennings[95170]:             "user": "glance",
Dec 09 12:04:55 compute-0 quirky_jennings[95170]:             "access_key": "IHHSO6U1EW68RK0J6YRA",
Dec 09 12:04:55 compute-0 quirky_jennings[95170]:             "secret_key": "sSkxyJIJVlZZhSutsaYck8dTNM397DguYmXHh4kU",
Dec 09 12:04:55 compute-0 quirky_jennings[95170]:             "active": true,
Dec 09 12:04:55 compute-0 quirky_jennings[95170]:             "create_date": "2025-12-09T12:04:55.326142Z"
Dec 09 12:04:55 compute-0 quirky_jennings[95170]:         }
Dec 09 12:04:55 compute-0 quirky_jennings[95170]:     ],
Dec 09 12:04:55 compute-0 quirky_jennings[95170]:     "swift_keys": [],
Dec 09 12:04:55 compute-0 quirky_jennings[95170]:     "caps": [],
Dec 09 12:04:55 compute-0 quirky_jennings[95170]:     "op_mask": "read, write, delete",
Dec 09 12:04:55 compute-0 quirky_jennings[95170]:     "default_placement": "",
Dec 09 12:04:55 compute-0 quirky_jennings[95170]:     "default_storage_class": "",
Dec 09 12:04:55 compute-0 quirky_jennings[95170]:     "placement_tags": [],
Dec 09 12:04:55 compute-0 quirky_jennings[95170]:     "bucket_quota": {
Dec 09 12:04:55 compute-0 quirky_jennings[95170]:         "enabled": false,
Dec 09 12:04:55 compute-0 quirky_jennings[95170]:         "check_on_raw": false,
Dec 09 12:04:55 compute-0 quirky_jennings[95170]:         "max_size": -1,
Dec 09 12:04:55 compute-0 quirky_jennings[95170]:         "max_size_kb": 0,
Dec 09 12:04:55 compute-0 quirky_jennings[95170]:         "max_objects": -1
Dec 09 12:04:55 compute-0 quirky_jennings[95170]:     },
Dec 09 12:04:55 compute-0 quirky_jennings[95170]:     "user_quota": {
Dec 09 12:04:55 compute-0 quirky_jennings[95170]:         "enabled": false,
Dec 09 12:04:55 compute-0 quirky_jennings[95170]:         "check_on_raw": false,
Dec 09 12:04:55 compute-0 quirky_jennings[95170]:         "max_size": -1,
Dec 09 12:04:55 compute-0 quirky_jennings[95170]:         "max_size_kb": 0,
Dec 09 12:04:55 compute-0 quirky_jennings[95170]:         "max_objects": -1
Dec 09 12:04:55 compute-0 quirky_jennings[95170]:     },
Dec 09 12:04:55 compute-0 quirky_jennings[95170]:     "temp_url_keys": [],
Dec 09 12:04:55 compute-0 quirky_jennings[95170]:     "type": "rgw",
Dec 09 12:04:55 compute-0 quirky_jennings[95170]:     "mfa_ids": [],
Dec 09 12:04:55 compute-0 quirky_jennings[95170]:     "account_id": "",
Dec 09 12:04:55 compute-0 quirky_jennings[95170]:     "path": "/",
Dec 09 12:04:55 compute-0 quirky_jennings[95170]:     "create_date": "2025-12-09T12:04:55.325682Z",
Dec 09 12:04:55 compute-0 quirky_jennings[95170]:     "tags": [],
Dec 09 12:04:55 compute-0 quirky_jennings[95170]:     "group_ids": []
Dec 09 12:04:55 compute-0 quirky_jennings[95170]: }
Dec 09 12:04:55 compute-0 quirky_jennings[95170]: 
Dec 09 12:04:55 compute-0 systemd[1]: libpod-63bf7d6f1c460604fff065b349511fffef9b795c39af670b3a694d59c40f3d8a.scope: Deactivated successfully.
Dec 09 12:04:55 compute-0 podman[95154]: 2025-12-09 12:04:55.395143082 +0000 UTC m=+0.528260838 container died 63bf7d6f1c460604fff065b349511fffef9b795c39af670b3a694d59c40f3d8a (image=quay.io/ceph/ceph:v19, name=quirky_jennings, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 09 12:04:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-d2bcb098e606110b71f06994483c34c386afd6faa58ac6902c9bbb5bbeae7070-merged.mount: Deactivated successfully.
Dec 09 12:04:55 compute-0 ceph-mon[74388]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec 09 12:04:55 compute-0 radosgw[89472]: ====== starting new request req=0x7fb91647e5d0 =====
Dec 09 12:04:55 compute-0 radosgw[89472]: ====== req done req=0x7fb91647e5d0 op status=0 http_status=200 latency=0.001000035s ======
Dec 09 12:04:55 compute-0 radosgw[89472]: beast: 0x7fb91647e5d0: 192.168.122.100 - anonymous [09/Dec/2025:12:04:55.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Dec 09 12:04:55 compute-0 podman[95154]: 2025-12-09 12:04:55.443453334 +0000 UTC m=+0.576571080 container remove 63bf7d6f1c460604fff065b349511fffef9b795c39af670b3a694d59c40f3d8a (image=quay.io/ceph/ceph:v19, name=quirky_jennings, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 09 12:04:55 compute-0 systemd[1]: libpod-conmon-63bf7d6f1c460604fff065b349511fffef9b795c39af670b3a694d59c40f3d8a.scope: Deactivated successfully.
Dec 09 12:04:55 compute-0 sudo[95151]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:55 compute-0 sudo[95292]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-harvwpalphlnvucpnhcryouxlcfwqeaw ; /usr/bin/python3'
Dec 09 12:04:55 compute-0 sudo[95292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:04:55 compute-0 podman[95295]: 2025-12-09 12:04:55.831206219 +0000 UTC m=+0.040368573 container create 2daad159ba576c4f85e6eab4bc4e0c24856eb8fcfa35ee058818c510c55c9ff3 (image=quay.io/ceph/ceph:v19, name=friendly_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec 09 12:04:55 compute-0 systemd[1]: Started libpod-conmon-2daad159ba576c4f85e6eab4bc4e0c24856eb8fcfa35ee058818c510c55c9ff3.scope.
Dec 09 12:04:55 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:04:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13d9ad8921a3e576699131cb7171e4e4666aa10166d29cd44b05dcb57a1f65bc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13d9ad8921a3e576699131cb7171e4e4666aa10166d29cd44b05dcb57a1f65bc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:04:55 compute-0 podman[95295]: 2025-12-09 12:04:55.814460929 +0000 UTC m=+0.023623303 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 09 12:04:55 compute-0 podman[95295]: 2025-12-09 12:04:55.911189708 +0000 UTC m=+0.120352092 container init 2daad159ba576c4f85e6eab4bc4e0c24856eb8fcfa35ee058818c510c55c9ff3 (image=quay.io/ceph/ceph:v19, name=friendly_lumiere, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec 09 12:04:55 compute-0 podman[95295]: 2025-12-09 12:04:55.916021518 +0000 UTC m=+0.125183872 container start 2daad159ba576c4f85e6eab4bc4e0c24856eb8fcfa35ee058818c510c55c9ff3 (image=quay.io/ceph/ceph:v19, name=friendly_lumiere, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec 09 12:04:55 compute-0 podman[95295]: 2025-12-09 12:04:55.919361516 +0000 UTC m=+0.128523890 container attach 2daad159ba576c4f85e6eab4bc4e0c24856eb8fcfa35ee058818c510c55c9ff3 (image=quay.io/ceph/ceph:v19, name=friendly_lumiere, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 12:04:56 compute-0 friendly_lumiere[95310]: {
Dec 09 12:04:56 compute-0 friendly_lumiere[95310]:     "user_id": "glance",
Dec 09 12:04:56 compute-0 friendly_lumiere[95310]:     "display_name": "Glance S3 User",
Dec 09 12:04:56 compute-0 friendly_lumiere[95310]:     "email": "",
Dec 09 12:04:56 compute-0 friendly_lumiere[95310]:     "suspended": 0,
Dec 09 12:04:56 compute-0 friendly_lumiere[95310]:     "max_buckets": 1000,
Dec 09 12:04:56 compute-0 friendly_lumiere[95310]:     "subusers": [],
Dec 09 12:04:56 compute-0 friendly_lumiere[95310]:     "keys": [
Dec 09 12:04:56 compute-0 friendly_lumiere[95310]:         {
Dec 09 12:04:56 compute-0 friendly_lumiere[95310]:             "user": "glance",
Dec 09 12:04:56 compute-0 friendly_lumiere[95310]:             "access_key": "IHHSO6U1EW68RK0J6YRA",
Dec 09 12:04:56 compute-0 friendly_lumiere[95310]:             "secret_key": "sSkxyJIJVlZZhSutsaYck8dTNM397DguYmXHh4kU",
Dec 09 12:04:56 compute-0 friendly_lumiere[95310]:             "active": true,
Dec 09 12:04:56 compute-0 friendly_lumiere[95310]:             "create_date": "2025-12-09T12:04:55.326142Z"
Dec 09 12:04:56 compute-0 friendly_lumiere[95310]:         }
Dec 09 12:04:56 compute-0 friendly_lumiere[95310]:     ],
Dec 09 12:04:56 compute-0 friendly_lumiere[95310]:     "swift_keys": [],
Dec 09 12:04:56 compute-0 friendly_lumiere[95310]:     "caps": [],
Dec 09 12:04:56 compute-0 friendly_lumiere[95310]:     "op_mask": "read, write, delete",
Dec 09 12:04:56 compute-0 friendly_lumiere[95310]:     "default_placement": "",
Dec 09 12:04:56 compute-0 friendly_lumiere[95310]:     "default_storage_class": "",
Dec 09 12:04:56 compute-0 friendly_lumiere[95310]:     "placement_tags": [],
Dec 09 12:04:56 compute-0 friendly_lumiere[95310]:     "bucket_quota": {
Dec 09 12:04:56 compute-0 friendly_lumiere[95310]:         "enabled": false,
Dec 09 12:04:56 compute-0 friendly_lumiere[95310]:         "check_on_raw": false,
Dec 09 12:04:56 compute-0 friendly_lumiere[95310]:         "max_size": -1,
Dec 09 12:04:56 compute-0 friendly_lumiere[95310]:         "max_size_kb": 0,
Dec 09 12:04:56 compute-0 friendly_lumiere[95310]:         "max_objects": -1
Dec 09 12:04:56 compute-0 friendly_lumiere[95310]:     },
Dec 09 12:04:56 compute-0 friendly_lumiere[95310]:     "user_quota": {
Dec 09 12:04:56 compute-0 friendly_lumiere[95310]:         "enabled": false,
Dec 09 12:04:56 compute-0 friendly_lumiere[95310]:         "check_on_raw": false,
Dec 09 12:04:56 compute-0 friendly_lumiere[95310]:         "max_size": -1,
Dec 09 12:04:56 compute-0 friendly_lumiere[95310]:         "max_size_kb": 0,
Dec 09 12:04:56 compute-0 friendly_lumiere[95310]:         "max_objects": -1
Dec 09 12:04:56 compute-0 friendly_lumiere[95310]:     },
Dec 09 12:04:56 compute-0 friendly_lumiere[95310]:     "temp_url_keys": [],
Dec 09 12:04:56 compute-0 friendly_lumiere[95310]:     "type": "rgw",
Dec 09 12:04:56 compute-0 friendly_lumiere[95310]:     "mfa_ids": [],
Dec 09 12:04:56 compute-0 friendly_lumiere[95310]:     "account_id": "",
Dec 09 12:04:56 compute-0 friendly_lumiere[95310]:     "path": "/",
Dec 09 12:04:56 compute-0 friendly_lumiere[95310]:     "create_date": "2025-12-09T12:04:55.325682Z",
Dec 09 12:04:56 compute-0 friendly_lumiere[95310]:     "tags": [],
Dec 09 12:04:56 compute-0 friendly_lumiere[95310]:     "group_ids": []
Dec 09 12:04:56 compute-0 friendly_lumiere[95310]: }
Dec 09 12:04:56 compute-0 systemd[1]: libpod-2daad159ba576c4f85e6eab4bc4e0c24856eb8fcfa35ee058818c510c55c9ff3.scope: Deactivated successfully.
Dec 09 12:04:56 compute-0 podman[95295]: 2025-12-09 12:04:56.133910727 +0000 UTC m=+0.343073081 container died 2daad159ba576c4f85e6eab4bc4e0c24856eb8fcfa35ee058818c510c55c9ff3 (image=quay.io/ceph/ceph:v19, name=friendly_lumiere, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 12:04:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-13d9ad8921a3e576699131cb7171e4e4666aa10166d29cd44b05dcb57a1f65bc-merged.mount: Deactivated successfully.
Dec 09 12:04:56 compute-0 podman[95295]: 2025-12-09 12:04:56.16550521 +0000 UTC m=+0.374667564 container remove 2daad159ba576c4f85e6eab4bc4e0c24856eb8fcfa35ee058818c510c55c9ff3 (image=quay.io/ceph/ceph:v19, name=friendly_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 09 12:04:56 compute-0 systemd[1]: libpod-conmon-2daad159ba576c4f85e6eab4bc4e0c24856eb8fcfa35ee058818c510c55c9ff3.scope: Deactivated successfully.
Dec 09 12:04:56 compute-0 sudo[95292]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:56 compute-0 ceph-mon[74388]: pgmap v10: 167 pgs: 167 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 0 B/s wr, 17 op/s
Dec 09 12:04:56 compute-0 ceph-mon[74388]: Deploying daemon node-exporter.compute-1 on compute-1
Dec 09 12:04:56 compute-0 ceph-mon[74388]: mgrmap e26: compute-0.wfxreg(active, since 6s), standbys: compute-1.lorvly, compute-2.hvlbot
Dec 09 12:04:56 compute-0 ceph-mon[74388]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec 09 12:04:56 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v11: 167 pgs: 167 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 0 B/s wr, 14 op/s
Dec 09 12:04:56 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 12:04:57 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 09 12:04:57 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:57 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 09 12:04:57 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:57 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Dec 09 12:04:57 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:57 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-2 on compute-2
Dec 09 12:04:57 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-2 on compute-2
Dec 09 12:04:57 compute-0 radosgw[89472]: ====== starting new request req=0x7fb91647e5d0 =====
Dec 09 12:04:57 compute-0 radosgw[89472]: ====== req done req=0x7fb91647e5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 09 12:04:57 compute-0 radosgw[89472]: beast: 0x7fb91647e5d0: 192.168.122.100 - anonymous [09/Dec/2025:12:04:57.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 09 12:04:58 compute-0 ceph-mon[74388]: pgmap v11: 167 pgs: 167 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 0 B/s wr, 14 op/s
Dec 09 12:04:58 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:58 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:58 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:58 compute-0 ceph-mon[74388]: Deploying daemon node-exporter.compute-2 on compute-2
Dec 09 12:04:58 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v12: 167 pgs: 167 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 0 B/s wr, 13 op/s
Dec 09 12:04:59 compute-0 radosgw[89472]: ====== starting new request req=0x7fb91647e5d0 =====
Dec 09 12:04:59 compute-0 radosgw[89472]: ====== req done req=0x7fb91647e5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 09 12:04:59 compute-0 radosgw[89472]: beast: 0x7fb91647e5d0: 192.168.122.100 - anonymous [09/Dec/2025:12:04:59.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 09 12:04:59 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 09 12:04:59 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:59 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 09 12:04:59 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:59 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Dec 09 12:04:59 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:59 compute-0 ceph-mgr[74679]: [progress INFO root] complete: finished ev 5e0e9545-f2de-48cb-bd1f-d58cb679a0f9 (Updating node-exporter deployment (+2 -> 3))
Dec 09 12:04:59 compute-0 ceph-mgr[74679]: [progress INFO root] Completed event 5e0e9545-f2de-48cb-bd1f-d58cb679a0f9 (Updating node-exporter deployment (+2 -> 3)) in 5 seconds
Dec 09 12:04:59 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Dec 09 12:04:59 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:04:59 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 09 12:04:59 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 09 12:04:59 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 09 12:04:59 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 09 12:04:59 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 12:04:59 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:04:59 compute-0 sudo[95410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 12:04:59 compute-0 sudo[95410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:59 compute-0 sudo[95410]: pam_unix(sudo:session): session closed for user root
Dec 09 12:04:59 compute-0 sudo[95435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 750b57e3-924f-51a5-ab09-01517535f732 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 09 12:04:59 compute-0 sudo[95435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:04:59 compute-0 podman[95499]: 2025-12-09 12:04:59.989024657 +0000 UTC m=+0.036884600 container create 4560391294c108e4c0ac41675dbee2695c07de804b06aa8718fe3197be4cf3e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec 09 12:05:00 compute-0 systemd[1]: Started libpod-conmon-4560391294c108e4c0ac41675dbee2695c07de804b06aa8718fe3197be4cf3e4.scope.
Dec 09 12:05:00 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:05:00 compute-0 ceph-mon[74388]: pgmap v12: 167 pgs: 167 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 0 B/s wr, 13 op/s
Dec 09 12:05:00 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:00 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:00 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:00 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:00 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 09 12:05:00 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 09 12:05:00 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:05:00 compute-0 podman[95499]: 2025-12-09 12:05:00.05236503 +0000 UTC m=+0.100225003 container init 4560391294c108e4c0ac41675dbee2695c07de804b06aa8718fe3197be4cf3e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_shirley, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Dec 09 12:05:00 compute-0 podman[95499]: 2025-12-09 12:05:00.05777131 +0000 UTC m=+0.105631253 container start 4560391294c108e4c0ac41675dbee2695c07de804b06aa8718fe3197be4cf3e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_shirley, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 12:05:00 compute-0 podman[95499]: 2025-12-09 12:05:00.061695558 +0000 UTC m=+0.109555761 container attach 4560391294c108e4c0ac41675dbee2695c07de804b06aa8718fe3197be4cf3e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_shirley, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Dec 09 12:05:00 compute-0 beautiful_shirley[95515]: 167 167
Dec 09 12:05:00 compute-0 systemd[1]: libpod-4560391294c108e4c0ac41675dbee2695c07de804b06aa8718fe3197be4cf3e4.scope: Deactivated successfully.
Dec 09 12:05:00 compute-0 conmon[95515]: conmon 4560391294c108e4c0ac <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4560391294c108e4c0ac41675dbee2695c07de804b06aa8718fe3197be4cf3e4.scope/container/memory.events
Dec 09 12:05:00 compute-0 podman[95499]: 2025-12-09 12:05:00.06399404 +0000 UTC m=+0.111854013 container died 4560391294c108e4c0ac41675dbee2695c07de804b06aa8718fe3197be4cf3e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_shirley, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 12:05:00 compute-0 podman[95499]: 2025-12-09 12:04:59.9737828 +0000 UTC m=+0.021642763 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 09 12:05:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-f648f4e8a944efadbb26c639c1ac37ff802af6419ca81a98c040d91808043d01-merged.mount: Deactivated successfully.
Dec 09 12:05:00 compute-0 podman[95499]: 2025-12-09 12:05:00.103405478 +0000 UTC m=+0.151265421 container remove 4560391294c108e4c0ac41675dbee2695c07de804b06aa8718fe3197be4cf3e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 12:05:00 compute-0 systemd[1]: libpod-conmon-4560391294c108e4c0ac41675dbee2695c07de804b06aa8718fe3197be4cf3e4.scope: Deactivated successfully.
Dec 09 12:05:00 compute-0 podman[95539]: 2025-12-09 12:05:00.247239747 +0000 UTC m=+0.037697539 container create 15d7c3f83df3aa9504a89c6f4da958c490fa04ddde0c623ac3f943973ddfa9ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_torvalds, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Dec 09 12:05:00 compute-0 systemd[1]: Started libpod-conmon-15d7c3f83df3aa9504a89c6f4da958c490fa04ddde0c623ac3f943973ddfa9ea.scope.
Dec 09 12:05:00 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:05:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/376c8dff6331ef5ab111464cbcb2b189b12ce2a713cf7bccec31d28765ea79e4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 12:05:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/376c8dff6331ef5ab111464cbcb2b189b12ce2a713cf7bccec31d28765ea79e4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:05:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/376c8dff6331ef5ab111464cbcb2b189b12ce2a713cf7bccec31d28765ea79e4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:05:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/376c8dff6331ef5ab111464cbcb2b189b12ce2a713cf7bccec31d28765ea79e4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 12:05:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/376c8dff6331ef5ab111464cbcb2b189b12ce2a713cf7bccec31d28765ea79e4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 12:05:00 compute-0 podman[95539]: 2025-12-09 12:05:00.229367297 +0000 UTC m=+0.019825119 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 09 12:05:00 compute-0 podman[95539]: 2025-12-09 12:05:00.330722309 +0000 UTC m=+0.121180101 container init 15d7c3f83df3aa9504a89c6f4da958c490fa04ddde0c623ac3f943973ddfa9ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec 09 12:05:00 compute-0 podman[95539]: 2025-12-09 12:05:00.33954044 +0000 UTC m=+0.129998252 container start 15d7c3f83df3aa9504a89c6f4da958c490fa04ddde0c623ac3f943973ddfa9ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_torvalds, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 09 12:05:00 compute-0 podman[95539]: 2025-12-09 12:05:00.343056294 +0000 UTC m=+0.133514106 container attach 15d7c3f83df3aa9504a89c6f4da958c490fa04ddde0c623ac3f943973ddfa9ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 12:05:00 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v13: 167 pgs: 167 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 255 B/s wr, 14 op/s
Dec 09 12:05:00 compute-0 great_torvalds[95555]: --> passed data devices: 0 physical, 1 LVM
Dec 09 12:05:00 compute-0 great_torvalds[95555]: --> All data devices are unavailable
Dec 09 12:05:00 compute-0 systemd[1]: libpod-15d7c3f83df3aa9504a89c6f4da958c490fa04ddde0c623ac3f943973ddfa9ea.scope: Deactivated successfully.
Dec 09 12:05:00 compute-0 podman[95539]: 2025-12-09 12:05:00.691860416 +0000 UTC m=+0.482318208 container died 15d7c3f83df3aa9504a89c6f4da958c490fa04ddde0c623ac3f943973ddfa9ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0)
Dec 09 12:05:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-376c8dff6331ef5ab111464cbcb2b189b12ce2a713cf7bccec31d28765ea79e4-merged.mount: Deactivated successfully.
Dec 09 12:05:00 compute-0 podman[95539]: 2025-12-09 12:05:00.734464688 +0000 UTC m=+0.524922480 container remove 15d7c3f83df3aa9504a89c6f4da958c490fa04ddde0c623ac3f943973ddfa9ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_torvalds, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec 09 12:05:00 compute-0 systemd[1]: libpod-conmon-15d7c3f83df3aa9504a89c6f4da958c490fa04ddde0c623ac3f943973ddfa9ea.scope: Deactivated successfully.
Dec 09 12:05:00 compute-0 sudo[95435]: pam_unix(sudo:session): session closed for user root
Dec 09 12:05:00 compute-0 sudo[95583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 12:05:00 compute-0 sudo[95583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:05:00 compute-0 sudo[95583]: pam_unix(sudo:session): session closed for user root
Dec 09 12:05:00 compute-0 sudo[95608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 750b57e3-924f-51a5-ab09-01517535f732 -- lvm list --format json
Dec 09 12:05:00 compute-0 sudo[95608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:05:01 compute-0 podman[95673]: 2025-12-09 12:05:01.248417759 +0000 UTC m=+0.036223757 container create 045cec0e5e7813e513d5774c3c50c28a190354acb08e2c02f29c777a1e9b3972 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 12:05:01 compute-0 systemd[1]: Started libpod-conmon-045cec0e5e7813e513d5774c3c50c28a190354acb08e2c02f29c777a1e9b3972.scope.
Dec 09 12:05:01 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:05:01 compute-0 podman[95673]: 2025-12-09 12:05:01.306034921 +0000 UTC m=+0.093840919 container init 045cec0e5e7813e513d5774c3c50c28a190354acb08e2c02f29c777a1e9b3972 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_rhodes, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 12:05:01 compute-0 podman[95673]: 2025-12-09 12:05:01.311029776 +0000 UTC m=+0.098835784 container start 045cec0e5e7813e513d5774c3c50c28a190354acb08e2c02f29c777a1e9b3972 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 12:05:01 compute-0 podman[95673]: 2025-12-09 12:05:01.313547195 +0000 UTC m=+0.101353213 container attach 045cec0e5e7813e513d5774c3c50c28a190354acb08e2c02f29c777a1e9b3972 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 09 12:05:01 compute-0 interesting_rhodes[95690]: 167 167
Dec 09 12:05:01 compute-0 systemd[1]: libpod-045cec0e5e7813e513d5774c3c50c28a190354acb08e2c02f29c777a1e9b3972.scope: Deactivated successfully.
Dec 09 12:05:01 compute-0 podman[95673]: 2025-12-09 12:05:01.314917114 +0000 UTC m=+0.102723112 container died 045cec0e5e7813e513d5774c3c50c28a190354acb08e2c02f29c777a1e9b3972 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 09 12:05:01 compute-0 podman[95673]: 2025-12-09 12:05:01.233675011 +0000 UTC m=+0.021481029 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 09 12:05:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-3c74dbe5986add831a14e37d7e5506f9b58729799508c666c7d3c9782d8c9dae-merged.mount: Deactivated successfully.
Dec 09 12:05:01 compute-0 podman[95673]: 2025-12-09 12:05:01.346489796 +0000 UTC m=+0.134295794 container remove 045cec0e5e7813e513d5774c3c50c28a190354acb08e2c02f29c777a1e9b3972 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_rhodes, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 09 12:05:01 compute-0 systemd[1]: libpod-conmon-045cec0e5e7813e513d5774c3c50c28a190354acb08e2c02f29c777a1e9b3972.scope: Deactivated successfully.
Dec 09 12:05:01 compute-0 radosgw[89472]: ====== starting new request req=0x7fb91647e5d0 =====
Dec 09 12:05:01 compute-0 radosgw[89472]: ====== req done req=0x7fb91647e5d0 op status=0 http_status=200 latency=0.001000036s ======
Dec 09 12:05:01 compute-0 radosgw[89472]: beast: 0x7fb91647e5d0: 192.168.122.100 - anonymous [09/Dec/2025:12:05:01.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000036s
Dec 09 12:05:01 compute-0 podman[95713]: 2025-12-09 12:05:01.507076475 +0000 UTC m=+0.039206022 container create f96c964caecf520a5ac97b0b06de7885d007ffc457c7cba5c022460c31158370 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_dhawan, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec 09 12:05:01 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 12:05:01 compute-0 systemd[1]: Started libpod-conmon-f96c964caecf520a5ac97b0b06de7885d007ffc457c7cba5c022460c31158370.scope.
Dec 09 12:05:01 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:05:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd19f46f6119942f969285f47eff0d25d4b30b1211e63a93fe928ab0e8a255e7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 12:05:01 compute-0 podman[95713]: 2025-12-09 12:05:01.490797622 +0000 UTC m=+0.022927189 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 09 12:05:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd19f46f6119942f969285f47eff0d25d4b30b1211e63a93fe928ab0e8a255e7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:05:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd19f46f6119942f969285f47eff0d25d4b30b1211e63a93fe928ab0e8a255e7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:05:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd19f46f6119942f969285f47eff0d25d4b30b1211e63a93fe928ab0e8a255e7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 12:05:01 compute-0 podman[95713]: 2025-12-09 12:05:01.594097132 +0000 UTC m=+0.126226689 container init f96c964caecf520a5ac97b0b06de7885d007ffc457c7cba5c022460c31158370 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_dhawan, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 12:05:01 compute-0 podman[95713]: 2025-12-09 12:05:01.601493052 +0000 UTC m=+0.133622599 container start f96c964caecf520a5ac97b0b06de7885d007ffc457c7cba5c022460c31158370 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_dhawan, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec 09 12:05:01 compute-0 podman[95713]: 2025-12-09 12:05:01.60426028 +0000 UTC m=+0.136389847 container attach f96c964caecf520a5ac97b0b06de7885d007ffc457c7cba5c022460c31158370 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default)
Dec 09 12:05:01 compute-0 adoring_dhawan[95729]: {
Dec 09 12:05:01 compute-0 adoring_dhawan[95729]:     "1": [
Dec 09 12:05:01 compute-0 adoring_dhawan[95729]:         {
Dec 09 12:05:01 compute-0 adoring_dhawan[95729]:             "devices": [
Dec 09 12:05:01 compute-0 adoring_dhawan[95729]:                 "/dev/loop3"
Dec 09 12:05:01 compute-0 adoring_dhawan[95729]:             ],
Dec 09 12:05:01 compute-0 adoring_dhawan[95729]:             "lv_name": "ceph_lv0",
Dec 09 12:05:01 compute-0 adoring_dhawan[95729]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 12:05:01 compute-0 adoring_dhawan[95729]:             "lv_size": "21470642176",
Dec 09 12:05:01 compute-0 adoring_dhawan[95729]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=NmXN7G-RzdJ-ddgq-wQWO-4Bzg-8Ecu-xD2Ou5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=750b57e3-924f-51a5-ab09-01517535f732,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=0cb4756c-1cb3-414f-a66b-4ca287023452,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 09 12:05:01 compute-0 adoring_dhawan[95729]:             "lv_uuid": "NmXN7G-RzdJ-ddgq-wQWO-4Bzg-8Ecu-xD2Ou5",
Dec 09 12:05:01 compute-0 adoring_dhawan[95729]:             "name": "ceph_lv0",
Dec 09 12:05:01 compute-0 adoring_dhawan[95729]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 09 12:05:01 compute-0 adoring_dhawan[95729]:             "tags": {
Dec 09 12:05:01 compute-0 adoring_dhawan[95729]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 09 12:05:01 compute-0 adoring_dhawan[95729]:                 "ceph.block_uuid": "NmXN7G-RzdJ-ddgq-wQWO-4Bzg-8Ecu-xD2Ou5",
Dec 09 12:05:01 compute-0 adoring_dhawan[95729]:                 "ceph.cephx_lockbox_secret": "",
Dec 09 12:05:01 compute-0 adoring_dhawan[95729]:                 "ceph.cluster_fsid": "750b57e3-924f-51a5-ab09-01517535f732",
Dec 09 12:05:01 compute-0 adoring_dhawan[95729]:                 "ceph.cluster_name": "ceph",
Dec 09 12:05:01 compute-0 adoring_dhawan[95729]:                 "ceph.crush_device_class": "",
Dec 09 12:05:01 compute-0 adoring_dhawan[95729]:                 "ceph.encrypted": "0",
Dec 09 12:05:01 compute-0 adoring_dhawan[95729]:                 "ceph.osd_fsid": "0cb4756c-1cb3-414f-a66b-4ca287023452",
Dec 09 12:05:01 compute-0 adoring_dhawan[95729]:                 "ceph.osd_id": "1",
Dec 09 12:05:01 compute-0 adoring_dhawan[95729]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 09 12:05:01 compute-0 adoring_dhawan[95729]:                 "ceph.type": "block",
Dec 09 12:05:01 compute-0 adoring_dhawan[95729]:                 "ceph.vdo": "0",
Dec 09 12:05:01 compute-0 adoring_dhawan[95729]:                 "ceph.with_tpm": "0"
Dec 09 12:05:01 compute-0 adoring_dhawan[95729]:             },
Dec 09 12:05:01 compute-0 adoring_dhawan[95729]:             "type": "block",
Dec 09 12:05:01 compute-0 adoring_dhawan[95729]:             "vg_name": "ceph_vg0"
Dec 09 12:05:01 compute-0 adoring_dhawan[95729]:         }
Dec 09 12:05:01 compute-0 adoring_dhawan[95729]:     ]
Dec 09 12:05:01 compute-0 adoring_dhawan[95729]: }
Dec 09 12:05:01 compute-0 systemd[1]: libpod-f96c964caecf520a5ac97b0b06de7885d007ffc457c7cba5c022460c31158370.scope: Deactivated successfully.
Dec 09 12:05:01 compute-0 podman[95713]: 2025-12-09 12:05:01.891182952 +0000 UTC m=+0.423312519 container died f96c964caecf520a5ac97b0b06de7885d007ffc457c7cba5c022460c31158370 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_dhawan, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec 09 12:05:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd19f46f6119942f969285f47eff0d25d4b30b1211e63a93fe928ab0e8a255e7-merged.mount: Deactivated successfully.
Dec 09 12:05:01 compute-0 podman[95713]: 2025-12-09 12:05:01.941959331 +0000 UTC m=+0.474088878 container remove f96c964caecf520a5ac97b0b06de7885d007ffc457c7cba5c022460c31158370 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_dhawan, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 12:05:01 compute-0 systemd[1]: libpod-conmon-f96c964caecf520a5ac97b0b06de7885d007ffc457c7cba5c022460c31158370.scope: Deactivated successfully.
Dec 09 12:05:02 compute-0 sudo[95608]: pam_unix(sudo:session): session closed for user root
Dec 09 12:05:02 compute-0 sudo[95753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 12:05:02 compute-0 ceph-mon[74388]: pgmap v13: 167 pgs: 167 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 255 B/s wr, 14 op/s
Dec 09 12:05:02 compute-0 sudo[95753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:05:02 compute-0 sudo[95753]: pam_unix(sudo:session): session closed for user root
Dec 09 12:05:02 compute-0 sudo[95778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 750b57e3-924f-51a5-ab09-01517535f732 -- raw list --format json
Dec 09 12:05:02 compute-0 sudo[95778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:05:02 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v14: 167 pgs: 167 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 218 B/s wr, 12 op/s
Dec 09 12:05:02 compute-0 podman[95842]: 2025-12-09 12:05:02.542262606 +0000 UTC m=+0.051838967 container create 49a6890fbed55effa6792418b2d854c571208e6a4e77740cf1cf8f4de5069bb1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Dec 09 12:05:02 compute-0 systemd[1]: Started libpod-conmon-49a6890fbed55effa6792418b2d854c571208e6a4e77740cf1cf8f4de5069bb1.scope.
Dec 09 12:05:02 compute-0 podman[95842]: 2025-12-09 12:05:02.515448772 +0000 UTC m=+0.025025223 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 09 12:05:02 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:05:02 compute-0 podman[95842]: 2025-12-09 12:05:02.628006339 +0000 UTC m=+0.137582730 container init 49a6890fbed55effa6792418b2d854c571208e6a4e77740cf1cf8f4de5069bb1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_merkle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 09 12:05:02 compute-0 podman[95842]: 2025-12-09 12:05:02.636759757 +0000 UTC m=+0.146336128 container start 49a6890fbed55effa6792418b2d854c571208e6a4e77740cf1cf8f4de5069bb1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1)
Dec 09 12:05:02 compute-0 podman[95842]: 2025-12-09 12:05:02.640326562 +0000 UTC m=+0.149902943 container attach 49a6890fbed55effa6792418b2d854c571208e6a4e77740cf1cf8f4de5069bb1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_merkle, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 12:05:02 compute-0 vibrant_merkle[95858]: 167 167
Dec 09 12:05:02 compute-0 systemd[1]: libpod-49a6890fbed55effa6792418b2d854c571208e6a4e77740cf1cf8f4de5069bb1.scope: Deactivated successfully.
Dec 09 12:05:02 compute-0 conmon[95858]: conmon 49a6890fbed55effa679 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-49a6890fbed55effa6792418b2d854c571208e6a4e77740cf1cf8f4de5069bb1.scope/container/memory.events
Dec 09 12:05:02 compute-0 podman[95842]: 2025-12-09 12:05:02.643207584 +0000 UTC m=+0.152783945 container died 49a6890fbed55effa6792418b2d854c571208e6a4e77740cf1cf8f4de5069bb1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 12:05:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-c6f1ca7c553c14306be5a8f31a0b660a776ef7d2ece107c56d9f282321327572-merged.mount: Deactivated successfully.
Dec 09 12:05:02 compute-0 podman[95842]: 2025-12-09 12:05:02.676212857 +0000 UTC m=+0.185789218 container remove 49a6890fbed55effa6792418b2d854c571208e6a4e77740cf1cf8f4de5069bb1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_merkle, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 09 12:05:02 compute-0 systemd[1]: libpod-conmon-49a6890fbed55effa6792418b2d854c571208e6a4e77740cf1cf8f4de5069bb1.scope: Deactivated successfully.
Dec 09 12:05:02 compute-0 podman[95881]: 2025-12-09 12:05:02.821229918 +0000 UTC m=+0.038885821 container create d9d9df9b65724062f249bdd75a9c3bfc9a6e9d989428d69278fe5419957d49e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_tu, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 09 12:05:02 compute-0 systemd[1]: Started libpod-conmon-d9d9df9b65724062f249bdd75a9c3bfc9a6e9d989428d69278fe5419957d49e0.scope.
Dec 09 12:05:02 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:05:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0899be77c203ed9e0b8744df59a4b5f847630ee37a0963a2cb78d7aedf24cbc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 09 12:05:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0899be77c203ed9e0b8744df59a4b5f847630ee37a0963a2cb78d7aedf24cbc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:05:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0899be77c203ed9e0b8744df59a4b5f847630ee37a0963a2cb78d7aedf24cbc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:05:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0899be77c203ed9e0b8744df59a4b5f847630ee37a0963a2cb78d7aedf24cbc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 12:05:02 compute-0 podman[95881]: 2025-12-09 12:05:02.8848594 +0000 UTC m=+0.102515333 container init d9d9df9b65724062f249bdd75a9c3bfc9a6e9d989428d69278fe5419957d49e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec 09 12:05:02 compute-0 podman[95881]: 2025-12-09 12:05:02.891257396 +0000 UTC m=+0.108913289 container start d9d9df9b65724062f249bdd75a9c3bfc9a6e9d989428d69278fe5419957d49e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_tu, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True)
Dec 09 12:05:02 compute-0 podman[95881]: 2025-12-09 12:05:02.893854538 +0000 UTC m=+0.111510471 container attach d9d9df9b65724062f249bdd75a9c3bfc9a6e9d989428d69278fe5419957d49e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_tu, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True)
Dec 09 12:05:02 compute-0 podman[95881]: 2025-12-09 12:05:02.804421126 +0000 UTC m=+0.022077059 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 09 12:05:03 compute-0 radosgw[89472]: ====== starting new request req=0x7fb91647e5d0 =====
Dec 09 12:05:03 compute-0 radosgw[89472]: ====== req done req=0x7fb91647e5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 09 12:05:03 compute-0 radosgw[89472]: beast: 0x7fb91647e5d0: 192.168.122.100 - anonymous [09/Dec/2025:12:05:03.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 09 12:05:03 compute-0 lvm[95972]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 09 12:05:03 compute-0 lvm[95972]: VG ceph_vg0 finished
Dec 09 12:05:03 compute-0 festive_tu[95898]: {}
Dec 09 12:05:03 compute-0 systemd[1]: libpod-d9d9df9b65724062f249bdd75a9c3bfc9a6e9d989428d69278fe5419957d49e0.scope: Deactivated successfully.
Dec 09 12:05:03 compute-0 podman[95881]: 2025-12-09 12:05:03.600826442 +0000 UTC m=+0.818482335 container died d9d9df9b65724062f249bdd75a9c3bfc9a6e9d989428d69278fe5419957d49e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_tu, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 12:05:03 compute-0 systemd[1]: libpod-d9d9df9b65724062f249bdd75a9c3bfc9a6e9d989428d69278fe5419957d49e0.scope: Consumed 1.068s CPU time.
Dec 09 12:05:03 compute-0 ceph-mgr[74679]: [progress INFO root] Writing back 13 completed events
Dec 09 12:05:03 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 09 12:05:03 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-c0899be77c203ed9e0b8744df59a4b5f847630ee37a0963a2cb78d7aedf24cbc-merged.mount: Deactivated successfully.
Dec 09 12:05:03 compute-0 podman[95881]: 2025-12-09 12:05:03.923579936 +0000 UTC m=+1.141235839 container remove d9d9df9b65724062f249bdd75a9c3bfc9a6e9d989428d69278fe5419957d49e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_tu, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec 09 12:05:03 compute-0 systemd[1]: libpod-conmon-d9d9df9b65724062f249bdd75a9c3bfc9a6e9d989428d69278fe5419957d49e0.scope: Deactivated successfully.
Dec 09 12:05:03 compute-0 sudo[95778]: pam_unix(sudo:session): session closed for user root
Dec 09 12:05:03 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 12:05:03 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:03 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 12:05:03 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:03 compute-0 ceph-mgr[74679]: [progress INFO root] update: starting ev bbec1e47-9172-4da0-b25c-c073770de7f7 (Updating mds.cephfs deployment (+3 -> 3))
Dec 09 12:05:03 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.optsue", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec 09 12:05:03 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.optsue", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec 09 12:05:04 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.optsue", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec 09 12:05:04 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 12:05:04 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:05:04 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-2.optsue on compute-2
Dec 09 12:05:04 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-2.optsue on compute-2
Dec 09 12:05:04 compute-0 ceph-mon[74388]: pgmap v14: 167 pgs: 167 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 218 B/s wr, 12 op/s
Dec 09 12:05:04 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:04 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:04 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:04 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.optsue", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec 09 12:05:04 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.optsue", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec 09 12:05:04 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:05:04 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v15: 167 pgs: 167 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 197 B/s wr, 11 op/s
Dec 09 12:05:05 compute-0 ceph-mon[74388]: Deploying daemon mds.cephfs.compute-2.optsue on compute-2
Dec 09 12:05:05 compute-0 radosgw[89472]: ====== starting new request req=0x7fb91647e5d0 =====
Dec 09 12:05:05 compute-0 radosgw[89472]: ====== req done req=0x7fb91647e5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 09 12:05:05 compute-0 radosgw[89472]: beast: 0x7fb91647e5d0: 192.168.122.100 - anonymous [09/Dec/2025:12:05:05.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 09 12:05:05 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 09 12:05:05 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:05 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 09 12:05:05 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:05 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec 09 12:05:05 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:05 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.siaefs", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec 09 12:05:05 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.siaefs", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec 09 12:05:05 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.siaefs", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec 09 12:05:05 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 12:05:05 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:05:05 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.siaefs on compute-0
Dec 09 12:05:05 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.siaefs on compute-0
Dec 09 12:05:05 compute-0 sudo[95990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 12:05:05 compute-0 sudo[95990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:05:05 compute-0 sudo[95990]: pam_unix(sudo:session): session closed for user root
Dec 09 12:05:05 compute-0 sudo[96015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 750b57e3-924f-51a5-ab09-01517535f732
Dec 09 12:05:05 compute-0 sudo[96015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:05:06 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).mds e3 new map
Dec 09 12:05:06 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).mds e3 print_map
                                           e3
                                           btime 2025-12-09T12:05:06.129463+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-09T12:04:49.529448+0000
                                           modified        2025-12-09T12:04:49.529449+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-2.optsue{-1:24187} state up:standby seq 1 addr [v2:192.168.122.102:6804/2866142313,v1:192.168.122.102:6805/2866142313] compat {c=[1],r=[1],i=[1fff]}]
Dec 09 12:05:06 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/2866142313,v1:192.168.122.102:6805/2866142313] up:boot
Dec 09 12:05:06 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.102:6804/2866142313,v1:192.168.122.102:6805/2866142313] as mds.0
Dec 09 12:05:06 compute-0 ceph-mon[74388]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.optsue assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Dec 09 12:05:06 compute-0 ceph-mon[74388]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Dec 09 12:05:06 compute-0 ceph-mon[74388]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Dec 09 12:05:06 compute-0 ceph-mon[74388]: log_channel(cluster) log [INF] : Cluster is now healthy
Dec 09 12:05:06 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Dec 09 12:05:06 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.optsue"} v 0)
Dec 09 12:05:06 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.optsue"}]: dispatch
Dec 09 12:05:06 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).mds e3 all = 0
Dec 09 12:05:06 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).mds e4 new map
Dec 09 12:05:06 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).mds e4 print_map
                                           e4
                                           btime 2025-12-09T12:05:06.140748+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-09T12:04:49.529448+0000
                                           modified        2025-12-09T12:05:06.140740+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24187}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                           [mds.cephfs.compute-2.optsue{0:24187} state up:creating seq 1 addr [v2:192.168.122.102:6804/2866142313,v1:192.168.122.102:6805/2866142313] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
Dec 09 12:05:06 compute-0 ceph-mon[74388]: pgmap v15: 167 pgs: 167 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 197 B/s wr, 11 op/s
Dec 09 12:05:06 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:06 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:06 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:06 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.siaefs", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec 09 12:05:06 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.siaefs", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec 09 12:05:06 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:05:06 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.optsue=up:creating}
Dec 09 12:05:06 compute-0 ceph-mon[74388]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.optsue is now active in filesystem cephfs as rank 0
Dec 09 12:05:06 compute-0 podman[96081]: 2025-12-09 12:05:06.355155607 +0000 UTC m=+0.036082982 container create aeb32858f92dfc21ca455606ca88c78dae47d669f7889420a65f5a1468d5ecf6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_proskuriakova, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 12:05:06 compute-0 systemd[1]: Started libpod-conmon-aeb32858f92dfc21ca455606ca88c78dae47d669f7889420a65f5a1468d5ecf6.scope.
Dec 09 12:05:06 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:05:06 compute-0 podman[96081]: 2025-12-09 12:05:06.427063932 +0000 UTC m=+0.107991327 container init aeb32858f92dfc21ca455606ca88c78dae47d669f7889420a65f5a1468d5ecf6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 09 12:05:06 compute-0 podman[96081]: 2025-12-09 12:05:06.433409815 +0000 UTC m=+0.114337210 container start aeb32858f92dfc21ca455606ca88c78dae47d669f7889420a65f5a1468d5ecf6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_proskuriakova, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 09 12:05:06 compute-0 podman[96081]: 2025-12-09 12:05:06.339753135 +0000 UTC m=+0.020680510 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 09 12:05:06 compute-0 podman[96081]: 2025-12-09 12:05:06.436529415 +0000 UTC m=+0.117456830 container attach aeb32858f92dfc21ca455606ca88c78dae47d669f7889420a65f5a1468d5ecf6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_proskuriakova, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 09 12:05:06 compute-0 romantic_proskuriakova[96098]: 167 167
Dec 09 12:05:06 compute-0 systemd[1]: libpod-aeb32858f92dfc21ca455606ca88c78dae47d669f7889420a65f5a1468d5ecf6.scope: Deactivated successfully.
Dec 09 12:05:06 compute-0 podman[96081]: 2025-12-09 12:05:06.438477034 +0000 UTC m=+0.119404409 container died aeb32858f92dfc21ca455606ca88c78dae47d669f7889420a65f5a1468d5ecf6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 09 12:05:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-7fda28042a71d7852b6c70388cef4d9432cd9ce29fb9a0f706563ca6aa155d72-merged.mount: Deactivated successfully.
Dec 09 12:05:06 compute-0 podman[96081]: 2025-12-09 12:05:06.490251928 +0000 UTC m=+0.171179303 container remove aeb32858f92dfc21ca455606ca88c78dae47d669f7889420a65f5a1468d5ecf6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec 09 12:05:06 compute-0 systemd[1]: libpod-conmon-aeb32858f92dfc21ca455606ca88c78dae47d669f7889420a65f5a1468d5ecf6.scope: Deactivated successfully.
Dec 09 12:05:06 compute-0 systemd[1]: Reloading.
Dec 09 12:05:06 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v16: 167 pgs: 167 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 170 B/s wr, 2 op/s
Dec 09 12:05:06 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 12:05:06 compute-0 systemd-rc-local-generator[96141]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 12:05:06 compute-0 systemd-sysv-generator[96144]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 12:05:06 compute-0 systemd[1]: Reloading.
Dec 09 12:05:06 compute-0 systemd-rc-local-generator[96185]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 12:05:06 compute-0 systemd-sysv-generator[96188]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 12:05:07 compute-0 systemd[1]: Starting Ceph mds.cephfs.compute-0.siaefs for 750b57e3-924f-51a5-ab09-01517535f732...
Dec 09 12:05:07 compute-0 ceph-mon[74388]: Deploying daemon mds.cephfs.compute-0.siaefs on compute-0
Dec 09 12:05:07 compute-0 ceph-mon[74388]: mds.? [v2:192.168.122.102:6804/2866142313,v1:192.168.122.102:6805/2866142313] up:boot
Dec 09 12:05:07 compute-0 ceph-mon[74388]: daemon mds.cephfs.compute-2.optsue assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Dec 09 12:05:07 compute-0 ceph-mon[74388]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Dec 09 12:05:07 compute-0 ceph-mon[74388]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Dec 09 12:05:07 compute-0 ceph-mon[74388]: Cluster is now healthy
Dec 09 12:05:07 compute-0 ceph-mon[74388]: fsmap cephfs:0 1 up:standby
Dec 09 12:05:07 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.optsue"}]: dispatch
Dec 09 12:05:07 compute-0 ceph-mon[74388]: fsmap cephfs:1 {0=cephfs.compute-2.optsue=up:creating}
Dec 09 12:05:07 compute-0 ceph-mon[74388]: daemon mds.cephfs.compute-2.optsue is now active in filesystem cephfs as rank 0
Dec 09 12:05:07 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).mds e5 new map
Dec 09 12:05:07 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).mds e5 print_map
                                           e5
                                           btime 2025-12-09T12:05:07.155925+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-09T12:04:49.529448+0000
                                           modified        2025-12-09T12:05:07.155923+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24187}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 24187 members: 24187
                                           [mds.cephfs.compute-2.optsue{0:24187} state up:active seq 2 addr [v2:192.168.122.102:6804/2866142313,v1:192.168.122.102:6805/2866142313] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
Dec 09 12:05:07 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/2866142313,v1:192.168.122.102:6805/2866142313] up:active
Dec 09 12:05:07 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.optsue=up:active}
Dec 09 12:05:07 compute-0 podman[96243]: 2025-12-09 12:05:07.286550931 +0000 UTC m=+0.038019611 container create 1eeb02c8aebc173b214578d9a7c56ebcd645548fd3a70a1768eb139d90c1c634 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-750b57e3-924f-51a5-ab09-01517535f732-mds-cephfs-compute-0-siaefs, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 12:05:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f5437d36aecbc33fd7cba3dbdcb0fd719032438f12ef7e9df6ccf4d123b9b55/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 09 12:05:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f5437d36aecbc33fd7cba3dbdcb0fd719032438f12ef7e9df6ccf4d123b9b55/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:05:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f5437d36aecbc33fd7cba3dbdcb0fd719032438f12ef7e9df6ccf4d123b9b55/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 09 12:05:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f5437d36aecbc33fd7cba3dbdcb0fd719032438f12ef7e9df6ccf4d123b9b55/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.siaefs supports timestamps until 2038 (0x7fffffff)
Dec 09 12:05:07 compute-0 podman[96243]: 2025-12-09 12:05:07.344829035 +0000 UTC m=+0.096297725 container init 1eeb02c8aebc173b214578d9a7c56ebcd645548fd3a70a1768eb139d90c1c634 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-750b57e3-924f-51a5-ab09-01517535f732-mds-cephfs-compute-0-siaefs, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 12:05:07 compute-0 podman[96243]: 2025-12-09 12:05:07.350244726 +0000 UTC m=+0.101713406 container start 1eeb02c8aebc173b214578d9a7c56ebcd645548fd3a70a1768eb139d90c1c634 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-750b57e3-924f-51a5-ab09-01517535f732-mds-cephfs-compute-0-siaefs, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 12:05:07 compute-0 bash[96243]: 1eeb02c8aebc173b214578d9a7c56ebcd645548fd3a70a1768eb139d90c1c634
Dec 09 12:05:07 compute-0 podman[96243]: 2025-12-09 12:05:07.268577448 +0000 UTC m=+0.020046128 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 09 12:05:07 compute-0 systemd[1]: Started Ceph mds.cephfs.compute-0.siaefs for 750b57e3-924f-51a5-ab09-01517535f732.
Dec 09 12:05:07 compute-0 ceph-mds[96262]: set uid:gid to 167:167 (ceph:ceph)
Dec 09 12:05:07 compute-0 ceph-mds[96262]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mds, pid 2
Dec 09 12:05:07 compute-0 ceph-mds[96262]: main not setting numa affinity
Dec 09 12:05:07 compute-0 ceph-mds[96262]: pidfile_write: ignore empty --pid-file
Dec 09 12:05:07 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-mds-cephfs-compute-0-siaefs[96258]: starting mds.cephfs.compute-0.siaefs at 
Dec 09 12:05:07 compute-0 ceph-mds[96262]: mds.cephfs.compute-0.siaefs Updating MDS map to version 5 from mon.0
Dec 09 12:05:07 compute-0 sudo[96015]: pam_unix(sudo:session): session closed for user root
Dec 09 12:05:07 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 12:05:07 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:07 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 12:05:07 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:07 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec 09 12:05:07 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:07 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.aqwtgn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec 09 12:05:07 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.aqwtgn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec 09 12:05:07 compute-0 radosgw[89472]: ====== starting new request req=0x7fb91647e5d0 =====
Dec 09 12:05:07 compute-0 radosgw[89472]: ====== req done req=0x7fb91647e5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 09 12:05:07 compute-0 radosgw[89472]: beast: 0x7fb91647e5d0: 192.168.122.100 - anonymous [09/Dec/2025:12:05:07.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 09 12:05:07 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.aqwtgn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec 09 12:05:07 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 12:05:07 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:05:07 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-1.aqwtgn on compute-1
Dec 09 12:05:07 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-1.aqwtgn on compute-1
Dec 09 12:05:08 compute-0 ceph-mon[74388]: pgmap v16: 167 pgs: 167 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 170 B/s wr, 2 op/s
Dec 09 12:05:08 compute-0 ceph-mon[74388]: mds.? [v2:192.168.122.102:6804/2866142313,v1:192.168.122.102:6805/2866142313] up:active
Dec 09 12:05:08 compute-0 ceph-mon[74388]: fsmap cephfs:1 {0=cephfs.compute-2.optsue=up:active}
Dec 09 12:05:08 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:08 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:08 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:08 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.aqwtgn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec 09 12:05:08 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.aqwtgn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec 09 12:05:08 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:05:08 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).mds e6 new map
Dec 09 12:05:08 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).mds e6 print_map
                                           e6
                                           btime 2025-12-09T12:05:08.181821+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-09T12:04:49.529448+0000
                                           modified        2025-12-09T12:05:07.155923+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24187}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 24187 members: 24187
                                           [mds.cephfs.compute-2.optsue{0:24187} state up:active seq 2 addr [v2:192.168.122.102:6804/2866142313,v1:192.168.122.102:6805/2866142313] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.siaefs{-1:14568} state up:standby seq 1 addr [v2:192.168.122.100:6806/2445848900,v1:192.168.122.100:6807/2445848900] compat {c=[1],r=[1],i=[1fff]}]
Dec 09 12:05:08 compute-0 ceph-mds[96262]: mds.cephfs.compute-0.siaefs Updating MDS map to version 6 from mon.0
Dec 09 12:05:08 compute-0 ceph-mds[96262]: mds.cephfs.compute-0.siaefs Monitors have assigned me to become a standby
Dec 09 12:05:08 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/2445848900,v1:192.168.122.100:6807/2445848900] up:boot
Dec 09 12:05:08 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.optsue=up:active} 1 up:standby
Dec 09 12:05:08 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.siaefs"} v 0)
Dec 09 12:05:08 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.siaefs"}]: dispatch
Dec 09 12:05:08 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).mds e6 all = 0
Dec 09 12:05:08 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).mds e7 new map
Dec 09 12:05:08 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).mds e7 print_map
                                           e7
                                           btime 2025-12-09T12:05:08.205529+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-09T12:04:49.529448+0000
                                           modified        2025-12-09T12:05:07.155923+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24187}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24187 members: 24187
                                           [mds.cephfs.compute-2.optsue{0:24187} state up:active seq 2 addr [v2:192.168.122.102:6804/2866142313,v1:192.168.122.102:6805/2866142313] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.siaefs{-1:14568} state up:standby seq 1 addr [v2:192.168.122.100:6806/2445848900,v1:192.168.122.100:6807/2445848900] compat {c=[1],r=[1],i=[1fff]}]
Dec 09 12:05:08 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.optsue=up:active} 1 up:standby
Dec 09 12:05:08 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v17: 167 pgs: 167 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 170 B/s wr, 2 op/s
Dec 09 12:05:09 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 09 12:05:09 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:09 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 09 12:05:09 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:09 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec 09 12:05:09 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:09 compute-0 ceph-mgr[74679]: [progress INFO root] complete: finished ev bbec1e47-9172-4da0-b25c-c073770de7f7 (Updating mds.cephfs deployment (+3 -> 3))
Dec 09 12:05:09 compute-0 ceph-mgr[74679]: [progress INFO root] Completed event bbec1e47-9172-4da0-b25c-c073770de7f7 (Updating mds.cephfs deployment (+3 -> 3)) in 5 seconds
Dec 09 12:05:09 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0)
Dec 09 12:05:09 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:09 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec 09 12:05:09 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:09 compute-0 ceph-mgr[74679]: [progress INFO root] update: starting ev 842032d7-7ab0-4e1f-b683-a031a80effba (Updating nfs.cephfs deployment (+3 -> 3))
Dec 09 12:05:09 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 09 12:05:09 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:09 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.0.0.compute-1.kdxrzl
Dec 09 12:05:09 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.0.0.compute-1.kdxrzl
Dec 09 12:05:09 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.kdxrzl", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Dec 09 12:05:09 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.kdxrzl", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Dec 09 12:05:09 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.kdxrzl", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
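
The paired audit lines show the full lifecycle of a mon command: dispatch, then finished. The caps grant the ganesha daemon read access to the monitors and read/write access scoped to the .nfs pool, namespace cephfs. Reproducing the same call by hand would look like the sketch below; the entity and caps are copied from the audit line, everything else (admin keyring, ceph on PATH) is assumed:

    import subprocess

    # Same auth get-or-create the mgr issued above; idempotent, returns
    # the existing key if the entity already exists.
    subprocess.run(
        ["ceph", "auth", "get-or-create",
         "client.nfs.cephfs.0.0.compute-1.kdxrzl",
         "mon", "allow r",
         "osd", "allow rw pool=.nfs namespace=cephfs"],
        check=True,
    )
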
Dec 09 12:05:09 compute-0 ceph-mgr[74679]: [cephadm INFO root] Ensuring nfs.cephfs.0 is in the ganesha grace table
Dec 09 12:05:09 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.0 is in the ganesha grace table
Dec 09 12:05:09 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Dec 09 12:05:09 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Dec 09 12:05:09 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Dec 09 12:05:09 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 12:05:09 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
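
config generate-minimal-conf is what cephadm uses to build the small ceph.conf shipped into each daemon container: essentially just the fsid and the mon_host list. Fetching it directly is a one-liner; a sketch, assuming admin access:

    import subprocess

    # Returns a minimal [global] section with fsid and mon_host; cephadm
    # writes this into the daemon's config dir before deploying it.
    conf = subprocess.run(
        ["ceph", "config", "generate-minimal-conf"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(conf)
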
Dec 09 12:05:09 compute-0 ceph-mon[74388]: Deploying daemon mds.cephfs.compute-1.aqwtgn on compute-1
Dec 09 12:05:09 compute-0 ceph-mon[74388]: mds.? [v2:192.168.122.100:6806/2445848900,v1:192.168.122.100:6807/2445848900] up:boot
Dec 09 12:05:09 compute-0 ceph-mon[74388]: fsmap cephfs:1 {0=cephfs.compute-2.optsue=up:active} 1 up:standby
Dec 09 12:05:09 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.siaefs"}]: dispatch
Dec 09 12:05:09 compute-0 ceph-mon[74388]: fsmap cephfs:1 {0=cephfs.compute-2.optsue=up:active} 1 up:standby
Dec 09 12:05:09 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:09 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:09 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:09 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:09 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:09 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:09 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.kdxrzl", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Dec 09 12:05:09 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.kdxrzl", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Dec 09 12:05:09 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Dec 09 12:05:09 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).mds e8 new map
Dec 09 12:05:09 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).mds e8 print_map
                                           e8
                                           btime 2025-12-09T12:05:09.212419+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-09T12:04:49.529448+0000
                                           modified        2025-12-09T12:05:07.155923+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24187}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24187 members: 24187
                                           [mds.cephfs.compute-2.optsue{0:24187} state up:active seq 2 addr [v2:192.168.122.102:6804/2866142313,v1:192.168.122.102:6805/2866142313] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.siaefs{-1:14568} state up:standby seq 1 addr [v2:192.168.122.100:6806/2445848900,v1:192.168.122.100:6807/2445848900] compat {c=[1],r=[1],i=[1fff]}]
                                           [mds.cephfs.compute-1.aqwtgn{-1:24176} state up:standby seq 1 addr [v2:192.168.122.101:6804/3777069659,v1:192.168.122.101:6805/3777069659] compat {c=[1],r=[1],i=[1fff]}]
Dec 09 12:05:09 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/3777069659,v1:192.168.122.101:6805/3777069659] up:boot
Dec 09 12:05:09 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.optsue=up:active} 2 up:standby
Dec 09 12:05:09 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-1.aqwtgn"} v 0)
Dec 09 12:05:09 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.aqwtgn"}]: dispatch
Dec 09 12:05:09 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).mds e8 all = 0
Dec 09 12:05:09 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Dec 09 12:05:09 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Dec 09 12:05:09 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Dec 09 12:05:09 compute-0 radosgw[89472]: ====== starting new request req=0x7fb91647e5d0 =====
Dec 09 12:05:09 compute-0 radosgw[89472]: ====== req done req=0x7fb91647e5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 09 12:05:09 compute-0 radosgw[89472]: beast: 0x7fb91647e5d0: 192.168.122.100 - anonymous [09/Dec/2025:12:05:09.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 09 12:05:09 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Dec 09 12:05:09 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
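
The ganesha configuration (conf-nfs.cephfs) and related state live as RADOS objects in the .nfs pool under the cephfs namespace, the same namespace the client caps above are scoped to. They can be listed directly with the rados tool; a sketch, assuming an admin keyring:

    import subprocess

    # List objects in pool .nfs, namespace cephfs; should include the
    # conf-nfs.cephfs object the mgr checked for above, plus per-daemon
    # objects as they are created.
    subprocess.run(["rados", "-p", ".nfs", "-N", "cephfs", "ls"], check=True)
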
Dec 09 12:05:09 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.0.0.compute-1.kdxrzl-rgw
Dec 09 12:05:09 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.0.0.compute-1.kdxrzl-rgw
Dec 09 12:05:09 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.kdxrzl-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Dec 09 12:05:09 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.kdxrzl-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 09 12:05:09 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.kdxrzl-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 09 12:05:09 compute-0 ceph-mgr[74679]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.0.0.compute-1.kdxrzl's ganesha conf is defaulting to empty
Dec 09 12:05:09 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.0.0.compute-1.kdxrzl's ganesha conf is defaulting to empty
Dec 09 12:05:09 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 12:05:09 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:05:09 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.0.0.compute-1.kdxrzl on compute-1
Dec 09 12:05:09 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.0.0.compute-1.kdxrzl on compute-1
Dec 09 12:05:10 compute-0 ceph-mon[74388]: pgmap v17: 167 pgs: 167 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 170 B/s wr, 2 op/s
Dec 09 12:05:10 compute-0 ceph-mon[74388]: Creating key for client.nfs.cephfs.0.0.compute-1.kdxrzl
Dec 09 12:05:10 compute-0 ceph-mon[74388]: Ensuring nfs.cephfs.0 is in the ganesha grace table
Dec 09 12:05:10 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Dec 09 12:05:10 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:05:10 compute-0 ceph-mon[74388]: mds.? [v2:192.168.122.101:6804/3777069659,v1:192.168.122.101:6805/3777069659] up:boot
Dec 09 12:05:10 compute-0 ceph-mon[74388]: fsmap cephfs:1 {0=cephfs.compute-2.optsue=up:active} 2 up:standby
Dec 09 12:05:10 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.aqwtgn"}]: dispatch
Dec 09 12:05:10 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Dec 09 12:05:10 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Dec 09 12:05:10 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.kdxrzl-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 09 12:05:10 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.kdxrzl-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 09 12:05:10 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:05:10 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).mds e9 new map
Dec 09 12:05:10 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).mds e9 print_map
                                           e9
                                           btime 2025-12-09T12:05:10.483468+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        9
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-09T12:04:49.529448+0000
                                           modified        2025-12-09T12:05:10.184214+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24187}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24187 members: 24187
                                           [mds.cephfs.compute-2.optsue{0:24187} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/2866142313,v1:192.168.122.102:6805/2866142313] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.siaefs{-1:14568} state up:standby seq 1 addr [v2:192.168.122.100:6806/2445848900,v1:192.168.122.100:6807/2445848900] compat {c=[1],r=[1],i=[1fff]}]
                                           [mds.cephfs.compute-1.aqwtgn{-1:24176} state up:standby seq 1 addr [v2:192.168.122.101:6804/3777069659,v1:192.168.122.101:6805/3777069659] compat {c=[1],r=[1],i=[1fff]}]
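
Note the difference from epoch e8: after the config set mds_join_fs command dispatched at 12:05:09, the active daemon now reports join_fscid=1, pinning it to filesystem id 1 ('cephfs'); the standbys pick it up in the following epochs. cephadm sets this per daemon; done by hand it would look like the sketch below (the daemon name is copied from the map, the rest is assumed):

    import subprocess

    # Pin an MDS daemon to the 'cephfs' filesystem so the monitors prefer
    # it when filling ranks for that fs; shows up as join_fscid in the map.
    subprocess.run(
        ["ceph", "config", "set", "mds.cephfs.compute-2.optsue",
         "mds_join_fs", "cephfs"],
        check=True,
    )
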
Dec 09 12:05:10 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/2866142313,v1:192.168.122.102:6805/2866142313] up:active
Dec 09 12:05:10 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.optsue=up:active} 2 up:standby
Dec 09 12:05:10 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v18: 167 pgs: 167 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1.5 KiB/s wr, 6 op/s
Dec 09 12:05:11 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 09 12:05:11 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:11 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 09 12:05:11 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:11 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 09 12:05:11 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:11 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.1.0.compute-2.seatck
Dec 09 12:05:11 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.1.0.compute-2.seatck
Dec 09 12:05:11 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.seatck", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Dec 09 12:05:11 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.seatck", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Dec 09 12:05:11 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.seatck", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Dec 09 12:05:11 compute-0 ceph-mgr[74679]: [cephadm INFO root] Ensuring nfs.cephfs.1 is in the ganesha grace table
Dec 09 12:05:11 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.1 is in the ganesha grace table
Dec 09 12:05:11 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Dec 09 12:05:11 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Dec 09 12:05:11 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Dec 09 12:05:11 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 12:05:11 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:05:11 compute-0 ceph-mon[74388]: Rados config object exists: conf-nfs.cephfs
Dec 09 12:05:11 compute-0 ceph-mon[74388]: Creating key for client.nfs.cephfs.0.0.compute-1.kdxrzl-rgw
Dec 09 12:05:11 compute-0 ceph-mon[74388]: Bind address in nfs.cephfs.0.0.compute-1.kdxrzl's ganesha conf is defaulting to empty
Dec 09 12:05:11 compute-0 ceph-mon[74388]: Deploying daemon nfs.cephfs.0.0.compute-1.kdxrzl on compute-1
Dec 09 12:05:11 compute-0 ceph-mon[74388]: mds.? [v2:192.168.122.102:6804/2866142313,v1:192.168.122.102:6805/2866142313] up:active
Dec 09 12:05:11 compute-0 ceph-mon[74388]: fsmap cephfs:1 {0=cephfs.compute-2.optsue=up:active} 2 up:standby
Dec 09 12:05:11 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:11 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:11 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:11 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.seatck", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Dec 09 12:05:11 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.seatck", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Dec 09 12:05:11 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Dec 09 12:05:11 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Dec 09 12:05:11 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:05:11 compute-0 radosgw[89472]: ====== starting new request req=0x7fb91647e5d0 =====
Dec 09 12:05:11 compute-0 radosgw[89472]: ====== req done req=0x7fb91647e5d0 op status=0 http_status=200 latency=0.001000036s ======
Dec 09 12:05:11 compute-0 radosgw[89472]: beast: 0x7fb91647e5d0: 192.168.122.100 - anonymous [09/Dec/2025:12:05:11.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000036s
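
The anonymous "HEAD / HTTP/1.0" requests arriving every two seconds from 192.168.122.100 look like a load-balancer health probe against radosgw. A sketch of the same probe; the port is an assumption, since it does not appear in these log lines:

    import http.client

    # Hypothetical health probe: HEAD / against the RGW beast frontend.
    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=5)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # the log shows RGW answering 200
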
Dec 09 12:05:11 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 12:05:12 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).mds e10 new map
Dec 09 12:05:12 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).mds e10 print_map
                                           e10
                                           btime 2025-12-09T12:05:12.231363+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        9
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-09T12:04:49.529448+0000
                                           modified        2025-12-09T12:05:10.184214+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24187}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24187 members: 24187
                                           [mds.cephfs.compute-2.optsue{0:24187} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/2866142313,v1:192.168.122.102:6805/2866142313] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.siaefs{-1:14568} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/2445848900,v1:192.168.122.100:6807/2445848900] compat {c=[1],r=[1],i=[1fff]}]
                                           [mds.cephfs.compute-1.aqwtgn{-1:24176} state up:standby seq 1 addr [v2:192.168.122.101:6804/3777069659,v1:192.168.122.101:6805/3777069659] compat {c=[1],r=[1],i=[1fff]}]
Dec 09 12:05:12 compute-0 ceph-mds[96262]: mds.cephfs.compute-0.siaefs Updating MDS map to version 10 from mon.0
Dec 09 12:05:12 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/2445848900,v1:192.168.122.100:6807/2445848900] up:standby
Dec 09 12:05:12 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.optsue=up:active} 2 up:standby
Dec 09 12:05:12 compute-0 ceph-mon[74388]: pgmap v18: 167 pgs: 167 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1.5 KiB/s wr, 6 op/s
Dec 09 12:05:12 compute-0 ceph-mon[74388]: Creating key for client.nfs.cephfs.1.0.compute-2.seatck
Dec 09 12:05:12 compute-0 ceph-mon[74388]: Ensuring nfs.cephfs.1 is in the ganesha grace table
Dec 09 12:05:12 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v19: 167 pgs: 167 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 1.3 KiB/s wr, 4 op/s
Dec 09 12:05:13 compute-0 ceph-mon[74388]: mds.? [v2:192.168.122.100:6806/2445848900,v1:192.168.122.100:6807/2445848900] up:standby
Dec 09 12:05:13 compute-0 ceph-mon[74388]: fsmap cephfs:1 {0=cephfs.compute-2.optsue=up:active} 2 up:standby
Dec 09 12:05:13 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).mds e11 new map
Dec 09 12:05:13 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).mds e11 print_map
                                           e11
                                           btime 2025-12-09T12:05:13.253815+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        9
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-09T12:04:49.529448+0000
                                           modified        2025-12-09T12:05:10.184214+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24187}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24187 members: 24187
                                           [mds.cephfs.compute-2.optsue{0:24187} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/2866142313,v1:192.168.122.102:6805/2866142313] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.siaefs{-1:14568} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/2445848900,v1:192.168.122.100:6807/2445848900] compat {c=[1],r=[1],i=[1fff]}]
                                           [mds.cephfs.compute-1.aqwtgn{-1:24176} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/3777069659,v1:192.168.122.101:6805/3777069659] compat {c=[1],r=[1],i=[1fff]}]
Dec 09 12:05:13 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/3777069659,v1:192.168.122.101:6805/3777069659] up:standby
Dec 09 12:05:13 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.optsue=up:active} 2 up:standby
Dec 09 12:05:13 compute-0 radosgw[89472]: ====== starting new request req=0x7fb91647e5d0 =====
Dec 09 12:05:13 compute-0 radosgw[89472]: ====== req done req=0x7fb91647e5d0 op status=0 http_status=200 latency=0.001000035s ======
Dec 09 12:05:13 compute-0 radosgw[89472]: beast: 0x7fb91647e5d0: 192.168.122.100 - anonymous [09/Dec/2025:12:05:13.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Dec 09 12:05:13 compute-0 ceph-mgr[74679]: [progress INFO root] Writing back 14 completed events
Dec 09 12:05:13 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 09 12:05:13 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:14 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Dec 09 12:05:14 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Dec 09 12:05:14 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Dec 09 12:05:14 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Dec 09 12:05:14 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Dec 09 12:05:14 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.1.0.compute-2.seatck-rgw
Dec 09 12:05:14 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.1.0.compute-2.seatck-rgw
Dec 09 12:05:14 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.seatck-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Dec 09 12:05:14 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.seatck-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 09 12:05:14 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.seatck-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 09 12:05:14 compute-0 ceph-mgr[74679]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.1.0.compute-2.seatck's ganesha conf is defaulting to empty
Dec 09 12:05:14 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.1.0.compute-2.seatck's ganesha conf is defaulting to empty
Dec 09 12:05:14 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 12:05:14 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:05:14 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.1.0.compute-2.seatck on compute-2
Dec 09 12:05:14 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.1.0.compute-2.seatck on compute-2
Dec 09 12:05:14 compute-0 ceph-mon[74388]: pgmap v19: 167 pgs: 167 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 1.3 KiB/s wr, 4 op/s
Dec 09 12:05:14 compute-0 ceph-mon[74388]: mds.? [v2:192.168.122.101:6804/3777069659,v1:192.168.122.101:6805/3777069659] up:standby
Dec 09 12:05:14 compute-0 ceph-mon[74388]: fsmap cephfs:1 {0=cephfs.compute-2.optsue=up:active} 2 up:standby
Dec 09 12:05:14 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:14 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Dec 09 12:05:14 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Dec 09 12:05:14 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.seatck-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 09 12:05:14 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.seatck-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 09 12:05:14 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:05:14 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v20: 167 pgs: 167 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 1.9 KiB/s wr, 5 op/s
Dec 09 12:05:15 compute-0 ceph-mon[74388]: Rados config object exists: conf-nfs.cephfs
Dec 09 12:05:15 compute-0 ceph-mon[74388]: Creating key for client.nfs.cephfs.1.0.compute-2.seatck-rgw
Dec 09 12:05:15 compute-0 ceph-mon[74388]: Bind address in nfs.cephfs.1.0.compute-2.seatck's ganesha conf is defaulting to empty
Dec 09 12:05:15 compute-0 ceph-mon[74388]: Deploying daemon nfs.cephfs.1.0.compute-2.seatck on compute-2
Dec 09 12:05:15 compute-0 radosgw[89472]: ====== starting new request req=0x7fb91647e5d0 =====
Dec 09 12:05:15 compute-0 radosgw[89472]: ====== req done req=0x7fb91647e5d0 op status=0 http_status=200 latency=0.001000035s ======
Dec 09 12:05:15 compute-0 radosgw[89472]: beast: 0x7fb91647e5d0: 192.168.122.100 - anonymous [09/Dec/2025:12:05:15.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Dec 09 12:05:15 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 09 12:05:15 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:15 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 09 12:05:15 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:15 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 09 12:05:15 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:15 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.2.0.compute-0.mbjryf
Dec 09 12:05:15 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.2.0.compute-0.mbjryf
Dec 09 12:05:15 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.mbjryf", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Dec 09 12:05:15 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.mbjryf", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Dec 09 12:05:15 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.mbjryf", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Dec 09 12:05:15 compute-0 ceph-mgr[74679]: [cephadm INFO root] Ensuring nfs.cephfs.2 is in the ganesha grace table
Dec 09 12:05:15 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.2 is in the ganesha grace table
Dec 09 12:05:15 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Dec 09 12:05:15 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Dec 09 12:05:15 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Dec 09 12:05:15 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 12:05:15 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:05:16 compute-0 ceph-mon[74388]: pgmap v20: 167 pgs: 167 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 1.9 KiB/s wr, 5 op/s
Dec 09 12:05:16 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:16 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:16 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:16 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.mbjryf", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Dec 09 12:05:16 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.mbjryf", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Dec 09 12:05:16 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Dec 09 12:05:16 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Dec 09 12:05:16 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:05:16 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v21: 167 pgs: 167 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 1.9 KiB/s wr, 5 op/s
Dec 09 12:05:16 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 12:05:17 compute-0 ceph-mon[74388]: Creating key for client.nfs.cephfs.2.0.compute-0.mbjryf
Dec 09 12:05:17 compute-0 ceph-mon[74388]: Ensuring nfs.cephfs.2 is in the ganesha grace table
Dec 09 12:05:17 compute-0 radosgw[89472]: ====== starting new request req=0x7fb91647e5d0 =====
Dec 09 12:05:17 compute-0 radosgw[89472]: ====== req done req=0x7fb91647e5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 09 12:05:17 compute-0 radosgw[89472]: beast: 0x7fb91647e5d0: 192.168.122.100 - anonymous [09/Dec/2025:12:05:17.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 09 12:05:18 compute-0 ceph-mon[74388]: pgmap v21: 167 pgs: 167 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 1.9 KiB/s wr, 5 op/s
Dec 09 12:05:18 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v22: 167 pgs: 167 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 1.9 KiB/s wr, 5 op/s
Dec 09 12:05:18 compute-0 ceph-mgr[74679]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 12:05:18 compute-0 ceph-mgr[74679]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 12:05:18 compute-0 ceph-mgr[74679]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 12:05:18 compute-0 ceph-mgr[74679]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 12:05:18 compute-0 ceph-mgr[74679]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 12:05:18 compute-0 ceph-mgr[74679]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 12:05:19 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Dec 09 12:05:19 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Dec 09 12:05:19 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Dec 09 12:05:19 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Dec 09 12:05:19 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Dec 09 12:05:19 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.2.0.compute-0.mbjryf-rgw
Dec 09 12:05:19 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.2.0.compute-0.mbjryf-rgw
Dec 09 12:05:19 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.mbjryf-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Dec 09 12:05:19 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.mbjryf-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 09 12:05:19 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.mbjryf-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 09 12:05:19 compute-0 ceph-mgr[74679]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.2.0.compute-0.mbjryf's ganesha conf is defaulting to empty
Dec 09 12:05:19 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.2.0.compute-0.mbjryf's ganesha conf is defaulting to empty
Dec 09 12:05:19 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 09 12:05:19 compute-0 ceph-mon[74388]: log_channel(audit) log [DBG] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:05:19 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.2.0.compute-0.mbjryf on compute-0
Dec 09 12:05:19 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.2.0.compute-0.mbjryf on compute-0
Dec 09 12:05:19 compute-0 sudo[96390]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 12:05:19 compute-0 sudo[96390]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:05:19 compute-0 sudo[96390]: pam_unix(sudo:session): session closed for user root
Dec 09 12:05:19 compute-0 sudo[96415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 750b57e3-924f-51a5-ab09-01517535f732
Dec 09 12:05:19 compute-0 sudo[96415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
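
The two sudo entries show how the mgr's cephadm module deploys a daemon on a host: it first locates python3, then runs the cephadm binary it copied under /var/lib/ceph/<fsid>/ with _orch deploy against the pinned container image. Once the host reports back, the result is visible through the orchestrator; a sketch:

    import subprocess

    # Ask the orchestrator for its daemon inventory; the nfs.cephfs.2.0...
    # daemon deployed above should appear once the deploy completes.
    subprocess.run(["ceph", "orch", "ps", "--daemon_type", "nfs"], check=True)
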
Dec 09 12:05:19 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Dec 09 12:05:19 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Dec 09 12:05:19 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.mbjryf-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 09 12:05:19 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.mbjryf-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 09 12:05:19 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 09 12:05:19 compute-0 radosgw[89472]: ====== starting new request req=0x7fb91647e5d0 =====
Dec 09 12:05:19 compute-0 radosgw[89472]: ====== req done req=0x7fb91647e5d0 op status=0 http_status=200 latency=0.001000035s ======
Dec 09 12:05:19 compute-0 radosgw[89472]: beast: 0x7fb91647e5d0: 192.168.122.100 - anonymous [09/Dec/2025:12:05:19.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
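
The anonymous HEAD / requests that recur on a two-second cadence for the rest of this section are load-balancer health probes against radosgw from 192.168.122.100, not client traffic; each returns 200 in about a millisecond. A minimal haproxy check of this shape would produce exactly this access-log pattern (backend name, address, and port are illustrative, not read from this host's generated config):

    backend rgw
        mode http
        option httpchk HEAD /
        server compute-0 192.168.122.100:8080 check inter 2s
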
Dec 09 12:05:19 compute-0 podman[96483]: 2025-12-09 12:05:19.727835431 +0000 UTC m=+0.044527641 container create 7b0abdac71572f6f92f648da3701ad10e3a34b5d975bee23263e780ed5632dae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_driscoll, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 09 12:05:19 compute-0 systemd[1]: Started libpod-conmon-7b0abdac71572f6f92f648da3701ad10e3a34b5d975bee23263e780ed5632dae.scope.
Dec 09 12:05:19 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:05:19 compute-0 podman[96483]: 2025-12-09 12:05:19.705728672 +0000 UTC m=+0.022420902 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 09 12:05:19 compute-0 podman[96483]: 2025-12-09 12:05:19.801637461 +0000 UTC m=+0.118329701 container init 7b0abdac71572f6f92f648da3701ad10e3a34b5d975bee23263e780ed5632dae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_driscoll, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 09 12:05:19 compute-0 podman[96483]: 2025-12-09 12:05:19.809029692 +0000 UTC m=+0.125721902 container start 7b0abdac71572f6f92f648da3701ad10e3a34b5d975bee23263e780ed5632dae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 12:05:19 compute-0 lucid_driscoll[96499]: 167 167
Dec 09 12:05:19 compute-0 systemd[1]: libpod-7b0abdac71572f6f92f648da3701ad10e3a34b5d975bee23263e780ed5632dae.scope: Deactivated successfully.
Dec 09 12:05:19 compute-0 podman[96483]: 2025-12-09 12:05:19.818311698 +0000 UTC m=+0.135003928 container attach 7b0abdac71572f6f92f648da3701ad10e3a34b5d975bee23263e780ed5632dae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_driscoll, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 09 12:05:19 compute-0 podman[96483]: 2025-12-09 12:05:19.819068475 +0000 UTC m=+0.135760685 container died 7b0abdac71572f6f92f648da3701ad10e3a34b5d975bee23263e780ed5632dae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_driscoll, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 09 12:05:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-fca84cc6da9433213ab05570d8bd48fdf9617faafa1bc079baa10d0b82a46347-merged.mount: Deactivated successfully.
Dec 09 12:05:19 compute-0 podman[96483]: 2025-12-09 12:05:19.850350238 +0000 UTC m=+0.167042448 container remove 7b0abdac71572f6f92f648da3701ad10e3a34b5d975bee23263e780ed5632dae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 09 12:05:19 compute-0 systemd[1]: libpod-conmon-7b0abdac71572f6f92f648da3701ad10e3a34b5d975bee23263e780ed5632dae.scope: Deactivated successfully.
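
The lucid_driscoll container above is not a daemon: cephadm routinely runs short-lived, randomly named containers to probe an image, and the whole create/init/start/attach/died/remove cycle here finishes in roughly 120 ms. The "167 167" it printed is the uid and gid of the ceph user inside the image, which cephadm needs before it can chown the daemon's data directory on the host. A hedged reproduction of the probe (the exact command cephadm runs inside the image may differ):

    podman run --rm quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec \
        stat -c '%u %g' /var/lib/ceph
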
Dec 09 12:05:19 compute-0 systemd[1]: Reloading.
Dec 09 12:05:19 compute-0 systemd-sysv-generator[96546]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 12:05:19 compute-0 systemd-rc-local-generator[96543]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 12:05:20 compute-0 sshd-session[96551]: banner exchange: Connection from 3.134.148.59 port 33932: invalid format
Dec 09 12:05:20 compute-0 systemd[1]: Reloading.
Dec 09 12:05:20 compute-0 systemd-rc-local-generator[96584]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 12:05:20 compute-0 systemd-sysv-generator[96587]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
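
Each deployment ends with cephadm writing a systemd unit and triggering a daemon-reload, which is why "Reloading." appears twice per daemon here; the sysv-generator complaint about the legacy network initscript and the non-executable rc.local notice are routine reload noise, not regressions. The unit follows cephadm's ceph-<fsid>@<daemon-name> convention, so the gateway being deployed can be checked with:

    systemctl status 'ceph-750b57e3-924f-51a5-ab09-01517535f732@nfs.cephfs.2.0.compute-0.mbjryf.service'
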
Dec 09 12:05:20 compute-0 ceph-mon[74388]: pgmap v22: 167 pgs: 167 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 1.9 KiB/s wr, 5 op/s
Dec 09 12:05:20 compute-0 ceph-mon[74388]: Rados config object exists: conf-nfs.cephfs
Dec 09 12:05:20 compute-0 ceph-mon[74388]: Creating key for client.nfs.cephfs.2.0.compute-0.mbjryf-rgw
Dec 09 12:05:20 compute-0 ceph-mon[74388]: Bind address in nfs.cephfs.2.0.compute-0.mbjryf's ganesha conf is defaulting to empty
Dec 09 12:05:20 compute-0 ceph-mon[74388]: Deploying daemon nfs.cephfs.2.0.compute-0.mbjryf on compute-0
Dec 09 12:05:20 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.mbjryf for 750b57e3-924f-51a5-ab09-01517535f732...
Dec 09 12:05:20 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v23: 167 pgs: 167 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 2.7 KiB/s wr, 8 op/s
Dec 09 12:05:20 compute-0 podman[96638]: 2025-12-09 12:05:20.716481802 +0000 UTC m=+0.068048870 container create 6e772ae90ccfe2bd6ce8e5602f8c6b0dc032f6457fa91a1e9e3c134ce22936e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec 09 12:05:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a1b9c33b6eb3c69cb2997b59ec0e67a432ec96f891c6b8a2d87a327efd41b11/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec 09 12:05:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a1b9c33b6eb3c69cb2997b59ec0e67a432ec96f891c6b8a2d87a327efd41b11/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:05:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a1b9c33b6eb3c69cb2997b59ec0e67a432ec96f891c6b8a2d87a327efd41b11/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 12:05:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a1b9c33b6eb3c69cb2997b59ec0e67a432ec96f891c6b8a2d87a327efd41b11/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.mbjryf-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec 09 12:05:20 compute-0 podman[96638]: 2025-12-09 12:05:20.773259412 +0000 UTC m=+0.124826490 container init 6e772ae90ccfe2bd6ce8e5602f8c6b0dc032f6457fa91a1e9e3c134ce22936e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 09 12:05:20 compute-0 podman[96638]: 2025-12-09 12:05:20.778119154 +0000 UTC m=+0.129686212 container start 6e772ae90ccfe2bd6ce8e5602f8c6b0dc032f6457fa91a1e9e3c134ce22936e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Dec 09 12:05:20 compute-0 bash[96638]: 6e772ae90ccfe2bd6ce8e5602f8c6b0dc032f6457fa91a1e9e3c134ce22936e6
Dec 09 12:05:20 compute-0 podman[96638]: 2025-12-09 12:05:20.696438895 +0000 UTC m=+0.048005973 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 09 12:05:20 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.mbjryf for 750b57e3-924f-51a5-ab09-01517535f732.
Dec 09 12:05:20 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:20 : epoch 69381080 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec 09 12:05:20 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:20 : epoch 69381080 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec 09 12:05:20 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:20 : epoch 69381080 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec 09 12:05:20 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:20 : epoch 69381080 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec 09 12:05:20 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:20 : epoch 69381080 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec 09 12:05:20 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:20 : epoch 69381080 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec 09 12:05:20 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:20 : epoch 69381080 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
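
Ganesha 5.9 comes up cleanly here: the config parses, the mdcache FD limit is pinned to the system maximum of 1048576, and monitoring_init binds a built-in metrics endpoint on 0.0.0.0:9587. Assuming that endpoint serves Prometheus-style metrics at the conventional path (an assumption, not shown in the log), it can be spot-checked from the host with:

    curl -s http://127.0.0.1:9587/metrics | head
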
Dec 09 12:05:20 compute-0 sudo[96415]: pam_unix(sudo:session): session closed for user root
Dec 09 12:05:20 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 12:05:20 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:20 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 12:05:20 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:20 : epoch 69381080 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 09 12:05:20 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:20 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 09 12:05:20 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:20 compute-0 ceph-mgr[74679]: [progress INFO root] complete: finished ev 842032d7-7ab0-4e1f-b683-a031a80effba (Updating nfs.cephfs deployment (+3 -> 3))
Dec 09 12:05:20 compute-0 ceph-mgr[74679]: [progress INFO root] Completed event 842032d7-7ab0-4e1f-b683-a031a80effba (Updating nfs.cephfs deployment (+3 -> 3)) in 12 seconds
Dec 09 12:05:20 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 09 12:05:20 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:20 compute-0 ceph-mgr[74679]: [progress INFO root] update: starting ev ee7bb843-d0d1-4e14-985c-07c78396d46e (Updating ingress.nfs.cephfs deployment (+6 -> 6))
Dec 09 12:05:20 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.nfs.cephfs/monitor_password}] v 0)
Dec 09 12:05:20 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:20 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-1.iqtreq on compute-1
Dec 09 12:05:20 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-1.iqtreq on compute-1
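
The progress module has just closed the NFS rollout ("+3 -> 3") and opened one for the ingress service ("+6 -> 6"): six daemons because ingress pairs one haproxy with one keepalived on each of the three compute hosts. The rollout can be followed with the standard orchestrator views:

    ceph orch ls ingress
    ceph orch ps --service_name ingress.nfs.cephfs
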
Dec 09 12:05:21 compute-0 radosgw[89472]: ====== starting new request req=0x7fb91647e5d0 =====
Dec 09 12:05:21 compute-0 radosgw[89472]: ====== req done req=0x7fb91647e5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 09 12:05:21 compute-0 radosgw[89472]: beast: 0x7fb91647e5d0: 192.168.122.100 - anonymous [09/Dec/2025:12:05:21.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 09 12:05:21 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 12:05:21 compute-0 ceph-mon[74388]: pgmap v23: 167 pgs: 167 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 2.7 KiB/s wr, 8 op/s
Dec 09 12:05:21 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:21 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:21 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:21 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:21 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:21 compute-0 ceph-mon[74388]: Deploying daemon haproxy.nfs.cephfs.compute-1.iqtreq on compute-1
Dec 09 12:05:22 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:22 : epoch 69381080 : compute-0 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Dec 09 12:05:22 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:22 : epoch 69381080 : compute-0 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Dec 09 12:05:22 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:22 : epoch 69381080 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 09 12:05:22 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:22 : epoch 69381080 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 09 12:05:22 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:22 : epoch 69381080 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 09 12:05:22 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:22 : epoch 69381080 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 09 12:05:22 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:22 : epoch 69381080 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 09 12:05:22 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:22 : epoch 69381080 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 09 12:05:22 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:22 : epoch 69381080 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 09 12:05:22 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:22 : epoch 69381080 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 09 12:05:22 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:22 : epoch 69381080 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 09 12:05:22 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:22 : epoch 69381080 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 09 12:05:22 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:22 : epoch 69381080 : compute-0 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000003:nfs.cephfs.2: -2
Dec 09 12:05:22 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:22 : epoch 69381080 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
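
The sequence above is the rados_cluster recovery backend doing a first-boot grace dance: ganesha keeps NFSv4 reclaim state in a shared RADOS omap object, and ret=-2 is ENOENT because no recovery db exists yet. The traverse therefore fails, the reclaimable client count is 0, and grace is lifted well before the advertised 90 seconds; the failed removal of rec-0000000000000003:nfs.cephfs.2 is the same ENOENT, cleaning up a record that was never written. The shared grace db can be inspected with the ganesha-rados-grace tool; the pool and namespace below are the cephadm defaults for this cluster, inferred rather than read from its config:

    ganesha-rados-grace --pool .nfs --ns cephfs dump
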
Dec 09 12:05:22 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:22 : epoch 69381080 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec 09 12:05:22 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:22 : epoch 69381080 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec 09 12:05:22 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:22 : epoch 69381080 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec 09 12:05:22 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:22 : epoch 69381080 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec 09 12:05:22 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:22 : epoch 69381080 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec 09 12:05:22 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:22 : epoch 69381080 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec 09 12:05:22 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:22 : epoch 69381080 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 09 12:05:22 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:22 : epoch 69381080 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 09 12:05:22 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:22 : epoch 69381080 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 09 12:05:22 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:22 : epoch 69381080 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec 09 12:05:22 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:22 : epoch 69381080 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 09 12:05:22 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:22 : epoch 69381080 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec 09 12:05:22 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:22 : epoch 69381080 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec 09 12:05:22 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:22 : epoch 69381080 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec 09 12:05:22 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:22 : epoch 69381080 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec 09 12:05:22 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:22 : epoch 69381080 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Dec 09 12:05:22 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:22 : epoch 69381080 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec 09 12:05:22 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:22 : epoch 69381080 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec 09 12:05:22 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:22 : epoch 69381080 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec 09 12:05:22 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:22 : epoch 69381080 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec 09 12:05:22 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:22 : epoch 69381080 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec 09 12:05:22 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:22 : epoch 69381080 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 09 12:05:22 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:22 : epoch 69381080 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec 09 12:05:22 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:22 : epoch 69381080 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 09 12:05:22 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:22 : epoch 69381080 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec 09 12:05:22 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:22 : epoch 69381080 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
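
Despite the CRIT severity, the DBUS and Kerberos failures above are expected in this deployment: the container mounts no /run/dbus socket, and cephadm-managed ganesha takes export updates by watching its RADOS config object rather than over dbus, so the dbus service thread simply exits; likewise no krb5 keytab is provisioned, so the gssd warnings are inert. The startup warning "No export entries found" just means no exports have been created for this cluster yet; once one is added it lands in the watched conf-nfs.cephfs object and ganesha reloads it live. Flag spellings vary slightly across releases, and the fsname below is a placeholder, but the export workflow is roughly:

    ceph nfs export create cephfs --cluster-id cephfs --pseudo-path /cephfs --fsname myfs
    ceph nfs export ls cephfs
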
Dec 09 12:05:22 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v24: 167 pgs: 167 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1.4 KiB/s wr, 4 op/s
Dec 09 12:05:23 compute-0 radosgw[89472]: ====== starting new request req=0x7fb91647e5d0 =====
Dec 09 12:05:23 compute-0 radosgw[89472]: ====== req done req=0x7fb91647e5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 09 12:05:23 compute-0 radosgw[89472]: beast: 0x7fb91647e5d0: 192.168.122.100 - anonymous [09/Dec/2025:12:05:23.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 09 12:05:23 compute-0 ceph-mgr[74679]: [progress INFO root] Writing back 15 completed events
Dec 09 12:05:23 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 09 12:05:23 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:23 compute-0 ceph-mon[74388]: pgmap v24: 167 pgs: 167 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1.4 KiB/s wr, 4 op/s
Dec 09 12:05:23 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:24 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v25: 167 pgs: 167 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 5.7 KiB/s rd, 2.4 KiB/s wr, 8 op/s
Dec 09 12:05:25 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 09 12:05:25 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:25 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 09 12:05:25 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:25 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec 09 12:05:25 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:25 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-0.aacrrf on compute-0
Dec 09 12:05:25 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-0.aacrrf on compute-0
Dec 09 12:05:25 compute-0 sudo[96708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 12:05:25 compute-0 sudo[96708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:05:25 compute-0 sudo[96708]: pam_unix(sudo:session): session closed for user root
Dec 09 12:05:25 compute-0 sudo[96733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/haproxy:2.3 --timeout 895 _orch deploy --fsid 750b57e3-924f-51a5-ab09-01517535f732
Dec 09 12:05:25 compute-0 sudo[96733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:05:25 compute-0 radosgw[89472]: ====== starting new request req=0x7fb91647e5d0 =====
Dec 09 12:05:25 compute-0 radosgw[89472]: ====== req done req=0x7fb91647e5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 09 12:05:25 compute-0 radosgw[89472]: beast: 0x7fb91647e5d0: 192.168.122.100 - anonymous [09/Dec/2025:12:05:25.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 09 12:05:25 compute-0 podman[96797]: 2025-12-09 12:05:25.569128585 +0000 UTC m=+0.040570420 container create d950078fa427be79f7e42450c668978e1dc9de5af7a055ecb31afbfd479b167e (image=quay.io/ceph/haproxy:2.3, name=cranky_feistel)
Dec 09 12:05:25 compute-0 systemd[1]: Started libpod-conmon-d950078fa427be79f7e42450c668978e1dc9de5af7a055ecb31afbfd479b167e.scope.
Dec 09 12:05:25 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:05:25 compute-0 podman[96797]: 2025-12-09 12:05:25.552371295 +0000 UTC m=+0.023813140 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Dec 09 12:05:25 compute-0 podman[96797]: 2025-12-09 12:05:25.893782446 +0000 UTC m=+0.365224291 container init d950078fa427be79f7e42450c668978e1dc9de5af7a055ecb31afbfd479b167e (image=quay.io/ceph/haproxy:2.3, name=cranky_feistel)
Dec 09 12:05:25 compute-0 podman[96797]: 2025-12-09 12:05:25.902200183 +0000 UTC m=+0.373641998 container start d950078fa427be79f7e42450c668978e1dc9de5af7a055ecb31afbfd479b167e (image=quay.io/ceph/haproxy:2.3, name=cranky_feistel)
Dec 09 12:05:25 compute-0 cranky_feistel[96813]: 0 0
Dec 09 12:05:25 compute-0 systemd[1]: libpod-d950078fa427be79f7e42450c668978e1dc9de5af7a055ecb31afbfd479b167e.scope: Deactivated successfully.
Dec 09 12:05:25 compute-0 podman[96797]: 2025-12-09 12:05:25.944832496 +0000 UTC m=+0.416274331 container attach d950078fa427be79f7e42450c668978e1dc9de5af7a055ecb31afbfd479b167e (image=quay.io/ceph/haproxy:2.3, name=cranky_feistel)
Dec 09 12:05:25 compute-0 podman[96797]: 2025-12-09 12:05:25.946430952 +0000 UTC m=+0.417872767 container died d950078fa427be79f7e42450c668978e1dc9de5af7a055ecb31afbfd479b167e (image=quay.io/ceph/haproxy:2.3, name=cranky_feistel)
Dec 09 12:05:26 compute-0 ceph-mon[74388]: pgmap v25: 167 pgs: 167 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 5.7 KiB/s rd, 2.4 KiB/s wr, 8 op/s
Dec 09 12:05:26 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:26 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:26 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:26 compute-0 ceph-mon[74388]: Deploying daemon haproxy.nfs.cephfs.compute-0.aacrrf on compute-0
Dec 09 12:05:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-da6784e12234535f6d1c8d3903a728a6f5255ed1215d40235b702875148274aa-merged.mount: Deactivated successfully.
Dec 09 12:05:26 compute-0 podman[96797]: 2025-12-09 12:05:26.292121005 +0000 UTC m=+0.763562820 container remove d950078fa427be79f7e42450c668978e1dc9de5af7a055ecb31afbfd479b167e (image=quay.io/ceph/haproxy:2.3, name=cranky_feistel)
Dec 09 12:05:26 compute-0 systemd[1]: libpod-conmon-d950078fa427be79f7e42450c668978e1dc9de5af7a055ecb31afbfd479b167e.scope: Deactivated successfully.
Dec 09 12:05:26 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:26 : epoch 69381080 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5fc000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
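
This TIRPC event repeats every two seconds from here on and is the NFS counterpart of the rgw probes above: the ingress haproxy fronts ganesha in PROXY-protocol mode, and a health probe that opens the TCP connection but closes it before a complete PROXY v2 header arrives makes ganesha mark that transient connection dead. The bare "%" is a formatting defect in this ganesha build's log message (the length value never gets printed), not data loss on the wire. Notably, haproxy only prepends the PROXY header to real traffic under send-proxy-v2 unless check-send-proxy is also set on the check, which is consistent with probes arriving headerless. The backend half of the generated ingress config is roughly of this shape (a sketch; the backend port is illustrative):

    backend nfs-backend
        mode tcp
        default-server send-proxy-v2
        server compute-0 192.168.122.100:12049 check
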
Dec 09 12:05:26 compute-0 systemd[1]: Reloading.
Dec 09 12:05:26 compute-0 systemd-rc-local-generator[96864]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 12:05:26 compute-0 systemd-sysv-generator[96867]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 12:05:26 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v26: 167 pgs: 167 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 4.8 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Dec 09 12:05:26 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 12:05:26 compute-0 systemd[1]: Reloading.
Dec 09 12:05:26 compute-0 systemd-rc-local-generator[96904]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 12:05:26 compute-0 systemd-sysv-generator[96908]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 12:05:26 compute-0 systemd[1]: Starting Ceph haproxy.nfs.cephfs.compute-0.aacrrf for 750b57e3-924f-51a5-ab09-01517535f732...
Dec 09 12:05:27 compute-0 podman[96961]: 2025-12-09 12:05:27.126175737 +0000 UTC m=+0.040067713 container create f2bab494745d0a22930e21737229bfc6c16059121c0957d7c230b134cf95c12a (image=quay.io/ceph/haproxy:2.3, name=ceph-750b57e3-924f-51a5-ab09-01517535f732-haproxy-nfs-cephfs-compute-0-aacrrf)
Dec 09 12:05:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f5adadb92d694cda9557fddd05f4627ea473cf425a11d04fa1bc69c0ace01e0/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Dec 09 12:05:27 compute-0 podman[96961]: 2025-12-09 12:05:27.180872096 +0000 UTC m=+0.094764092 container init f2bab494745d0a22930e21737229bfc6c16059121c0957d7c230b134cf95c12a (image=quay.io/ceph/haproxy:2.3, name=ceph-750b57e3-924f-51a5-ab09-01517535f732-haproxy-nfs-cephfs-compute-0-aacrrf)
Dec 09 12:05:27 compute-0 podman[96961]: 2025-12-09 12:05:27.185563261 +0000 UTC m=+0.099455237 container start f2bab494745d0a22930e21737229bfc6c16059121c0957d7c230b134cf95c12a (image=quay.io/ceph/haproxy:2.3, name=ceph-750b57e3-924f-51a5-ab09-01517535f732-haproxy-nfs-cephfs-compute-0-aacrrf)
Dec 09 12:05:27 compute-0 bash[96961]: f2bab494745d0a22930e21737229bfc6c16059121c0957d7c230b134cf95c12a
Dec 09 12:05:27 compute-0 podman[96961]: 2025-12-09 12:05:27.107773569 +0000 UTC m=+0.021665595 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Dec 09 12:05:27 compute-0 systemd[1]: Started Ceph haproxy.nfs.cephfs.compute-0.aacrrf for 750b57e3-924f-51a5-ab09-01517535f732.
Dec 09 12:05:27 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-haproxy-nfs-cephfs-compute-0-aacrrf[96976]: [NOTICE] 342/120527 (2) : New worker #1 (4) forked
Dec 09 12:05:27 compute-0 sudo[96733]: pam_unix(sudo:session): session closed for user root
Dec 09 12:05:27 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 12:05:27 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:27 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 12:05:27 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:27 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec 09 12:05:27 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:27 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-2.zvbgbt on compute-2
Dec 09 12:05:27 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-2.zvbgbt on compute-2
Dec 09 12:05:27 compute-0 radosgw[89472]: ====== starting new request req=0x7fb91647e5d0 =====
Dec 09 12:05:27 compute-0 radosgw[89472]: ====== req done req=0x7fb91647e5d0 op status=0 http_status=200 latency=0.001000036s ======
Dec 09 12:05:27 compute-0 radosgw[89472]: beast: 0x7fb91647e5d0: 192.168.122.100 - anonymous [09/Dec/2025:12:05:27.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000036s
Dec 09 12:05:28 compute-0 ceph-mon[74388]: pgmap v26: 167 pgs: 167 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 4.8 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Dec 09 12:05:28 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:28 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:28 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:28 compute-0 ceph-mon[74388]: Deploying daemon haproxy.nfs.cephfs.compute-2.zvbgbt on compute-2
Dec 09 12:05:28 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:28 : epoch 69381080 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5f8001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 09 12:05:28 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:28 : epoch 69381080 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5d8000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 09 12:05:28 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v27: 167 pgs: 167 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 4.8 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Dec 09 12:05:29 compute-0 radosgw[89472]: ====== starting new request req=0x7fb91647e5d0 =====
Dec 09 12:05:29 compute-0 radosgw[89472]: ====== req done req=0x7fb91647e5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 09 12:05:29 compute-0 radosgw[89472]: beast: 0x7fb91647e5d0: 192.168.122.100 - anonymous [09/Dec/2025:12:05:29.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 09 12:05:30 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:30 : epoch 69381080 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5fc000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 09 12:05:30 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:30 : epoch 69381080 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5e4000fa0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 09 12:05:30 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v28: 167 pgs: 167 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 4.8 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Dec 09 12:05:30 compute-0 ceph-mon[74388]: pgmap v27: 167 pgs: 167 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 4.8 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Dec 09 12:05:31 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 09 12:05:31 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:31 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 09 12:05:31 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:31 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec 09 12:05:31 compute-0 radosgw[89472]: ====== starting new request req=0x7fb91647e5d0 =====
Dec 09 12:05:31 compute-0 radosgw[89472]: ====== req done req=0x7fb91647e5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 09 12:05:31 compute-0 radosgw[89472]: beast: 0x7fb91647e5d0: 192.168.122.100 - anonymous [09/Dec/2025:12:05:31.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 09 12:05:31 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:31 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.nfs.cephfs/keepalived_password}] v 0)
Dec 09 12:05:31 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:31 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 09 12:05:31 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 09 12:05:31 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 09 12:05:31 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 09 12:05:31 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec 09 12:05:31 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec 09 12:05:31 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-2.hwobre on compute-2
Dec 09 12:05:31 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-2.hwobre on compute-2
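
Before placing keepalived, cephadm verifies on each backend host that the virtual IP 192.168.122.2 falls inside a locally configured subnet (192.168.122.0/24 on br-ex), so any of the three nodes can claim the VIP on failover. The service being rolled out corresponds to an ingress spec of roughly this shape; hosts, service names, and VIP are taken from the log, while the ports are illustrative:

    service_type: ingress
    service_id: nfs.cephfs
    placement:
      hosts:
        - compute-0
        - compute-1
        - compute-2
    spec:
      backend_service: nfs.cephfs
      virtual_ip: 192.168.122.2/24
      frontend_port: 2049
      monitor_port: 9049
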
Dec 09 12:05:31 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 12:05:31 compute-0 ceph-mon[74388]: pgmap v28: 167 pgs: 167 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 4.8 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Dec 09 12:05:31 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:31 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:31 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:31 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:32 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:32 : epoch 69381080 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5f8001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 09 12:05:32 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:32 : epoch 69381080 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5d80016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 09 12:05:32 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v29: 167 pgs: 167 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 1023 B/s wr, 4 op/s
Dec 09 12:05:32 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:32 : epoch 69381080 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5fc0021f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 09 12:05:32 compute-0 ceph-mon[74388]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 09 12:05:32 compute-0 ceph-mon[74388]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 09 12:05:32 compute-0 ceph-mon[74388]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec 09 12:05:32 compute-0 ceph-mon[74388]: Deploying daemon keepalived.nfs.cephfs.compute-2.hwobre on compute-2
Dec 09 12:05:33 compute-0 radosgw[89472]: ====== starting new request req=0x7fb91647e5d0 =====
Dec 09 12:05:33 compute-0 radosgw[89472]: ====== req done req=0x7fb91647e5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 09 12:05:33 compute-0 radosgw[89472]: beast: 0x7fb91647e5d0: 192.168.122.100 - anonymous [09/Dec/2025:12:05:33.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 09 12:05:33 compute-0 ceph-mon[74388]: pgmap v29: 167 pgs: 167 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 1023 B/s wr, 4 op/s
Dec 09 12:05:34 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:34 : epoch 69381080 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5e4001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 09 12:05:34 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:34 : epoch 69381080 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5f8001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 09 12:05:34 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v30: 167 pgs: 167 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 1023 B/s wr, 4 op/s
Dec 09 12:05:34 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:34 : epoch 69381080 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5d80016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 09 12:05:35 compute-0 radosgw[89472]: ====== starting new request req=0x7fb91647e5d0 =====
Dec 09 12:05:35 compute-0 radosgw[89472]: ====== req done req=0x7fb91647e5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 09 12:05:35 compute-0 radosgw[89472]: beast: 0x7fb91647e5d0: 192.168.122.100 - anonymous [09/Dec/2025:12:05:35.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 09 12:05:36 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:36 : epoch 69381080 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5fc0021f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 09 12:05:36 compute-0 ceph-mon[74388]: pgmap v30: 167 pgs: 167 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 1023 B/s wr, 4 op/s
Dec 09 12:05:36 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:36 : epoch 69381080 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5e4001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 09 12:05:36 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v31: 167 pgs: 167 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 09 12:05:36 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 12:05:36 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 09 12:05:36 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:36 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 09 12:05:36 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:36 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec 09 12:05:36 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:36 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec 09 12:05:36 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec 09 12:05:36 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 09 12:05:36 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 09 12:05:36 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 09 12:05:36 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 09 12:05:36 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-1.rwzywx on compute-1
Dec 09 12:05:36 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-1.rwzywx on compute-1
Dec 09 12:05:36 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:36 : epoch 69381080 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5f8001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 09 12:05:37 compute-0 radosgw[89472]: ====== starting new request req=0x7fb91647e5d0 =====
Dec 09 12:05:37 compute-0 radosgw[89472]: ====== req done req=0x7fb91647e5d0 op status=0 http_status=200 latency=0.001000036s ======
Dec 09 12:05:37 compute-0 radosgw[89472]: beast: 0x7fb91647e5d0: 192.168.122.100 - anonymous [09/Dec/2025:12:05:37.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000036s
Dec 09 12:05:37 compute-0 ceph-mon[74388]: pgmap v31: 167 pgs: 167 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 09 12:05:37 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:37 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:37 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:37 compute-0 ceph-mon[74388]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec 09 12:05:37 compute-0 ceph-mon[74388]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 09 12:05:37 compute-0 ceph-mon[74388]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 09 12:05:37 compute-0 ceph-mon[74388]: Deploying daemon keepalived.nfs.cephfs.compute-1.rwzywx on compute-1
Dec 09 12:05:38 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:38 : epoch 69381080 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5d80016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 09 12:05:38 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:38 : epoch 69381080 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5fc0021f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 09 12:05:38 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v32: 167 pgs: 167 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 09 12:05:38 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:38 : epoch 69381080 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5e4001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 09 12:05:39 compute-0 radosgw[89472]: ====== starting new request req=0x7fb91647e5d0 =====
Dec 09 12:05:39 compute-0 radosgw[89472]: ====== req done req=0x7fb91647e5d0 op status=0 http_status=200 latency=0.001000035s ======
Dec 09 12:05:39 compute-0 radosgw[89472]: beast: 0x7fb91647e5d0: 192.168.122.100 - anonymous [09/Dec/2025:12:05:39.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Dec 09 12:05:39 compute-0 ceph-mon[74388]: pgmap v32: 167 pgs: 167 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 09 12:05:40 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:40 : epoch 69381080 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5f8001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 09 12:05:40 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:40 : epoch 69381080 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5d8002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 09 12:05:40 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v33: 167 pgs: 167 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 09 12:05:40 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:40 : epoch 69381080 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5fc0095a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 09 12:05:41 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 09 12:05:41 compute-0 radosgw[89472]: ====== starting new request req=0x7fb91647e5d0 =====
Dec 09 12:05:41 compute-0 radosgw[89472]: ====== req done req=0x7fb91647e5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 09 12:05:41 compute-0 radosgw[89472]: beast: 0x7fb91647e5d0: 192.168.122.100 - anonymous [09/Dec/2025:12:05:41.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 09 12:05:41 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 12:05:41 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:41 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 09 12:05:41 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:41 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec 09 12:05:41 compute-0 ceph-mon[74388]: pgmap v33: 167 pgs: 167 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 09 12:05:41 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:41 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:41 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 09 12:05:41 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 09 12:05:41 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec 09 12:05:41 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec 09 12:05:41 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 09 12:05:41 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 09 12:05:41 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-0.zhroot on compute-0
Dec 09 12:05:41 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-0.zhroot on compute-0
Dec 09 12:05:41 compute-0 sudo[96990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 12:05:41 compute-0 sudo[96990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:05:41 compute-0 sudo[96990]: pam_unix(sudo:session): session closed for user root
Dec 09 12:05:42 compute-0 sudo[97015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/keepalived:2.2.4 --timeout 895 _orch deploy --fsid 750b57e3-924f-51a5-ab09-01517535f732
Dec 09 12:05:42 compute-0 sudo[97015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:05:42 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:42 : epoch 69381080 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5e4002f50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 09 12:05:42 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:42 : epoch 69381080 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5f8001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 09 12:05:42 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v34: 167 pgs: 167 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 09 12:05:42 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:42 : epoch 69381080 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5d8002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 09 12:05:42 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:42 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:42 compute-0 ceph-mon[74388]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 09 12:05:42 compute-0 ceph-mon[74388]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec 09 12:05:42 compute-0 ceph-mon[74388]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 09 12:05:42 compute-0 ceph-mon[74388]: Deploying daemon keepalived.nfs.cephfs.compute-0.zhroot on compute-0
Dec 09 12:05:43 compute-0 radosgw[89472]: ====== starting new request req=0x7fb91647e5d0 =====
Dec 09 12:05:43 compute-0 radosgw[89472]: ====== req done req=0x7fb91647e5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 09 12:05:43 compute-0 radosgw[89472]: beast: 0x7fb91647e5d0: 192.168.122.100 - anonymous [09/Dec/2025:12:05:43.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 09 12:05:43 compute-0 ceph-mon[74388]: pgmap v34: 167 pgs: 167 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 09 12:05:44 compute-0 sshd-session[97126]: Accepted publickey for zuul from 192.168.122.10 port 48152 ssh2: ECDSA SHA256:9TQybH6jbBrVcztEaDmRsG3ssVtaycQ7UiUr3v9GScY
Dec 09 12:05:44 compute-0 systemd-logind[799]: New session 37 of user zuul.
Dec 09 12:05:44 compute-0 systemd[1]: Started Session 37 of User zuul.
Dec 09 12:05:44 compute-0 sshd-session[97126]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 12:05:44 compute-0 sudo[97130]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Dec 09 12:05:44 compute-0 sudo[97130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 12:05:44 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:44 : epoch 69381080 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5fc0095a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 09 12:05:44 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:44 : epoch 69381080 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5e4002f50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 09 12:05:44 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v35: 167 pgs: 167 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 09 12:05:44 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:44 : epoch 69381080 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5f8001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 09 12:05:45 compute-0 radosgw[89472]: ====== starting new request req=0x7fb91647e5d0 =====
Dec 09 12:05:45 compute-0 radosgw[89472]: ====== req done req=0x7fb91647e5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 09 12:05:45 compute-0 radosgw[89472]: beast: 0x7fb91647e5d0: 192.168.122.100 - anonymous [09/Dec/2025:12:05:45.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 09 12:05:45 compute-0 podman[97078]: 2025-12-09 12:05:45.611307019 +0000 UTC m=+3.183182443 container create 64f62853378db6206c7158679e0f3c7d46ef00bfe839336886f243a8af22a577 (image=quay.io/ceph/keepalived:2.2.4, name=nifty_banzai, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, architecture=x86_64, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git)
Dec 09 12:05:45 compute-0 systemd[1]: Started libpod-conmon-64f62853378db6206c7158679e0f3c7d46ef00bfe839336886f243a8af22a577.scope.
Dec 09 12:05:45 compute-0 podman[97078]: 2025-12-09 12:05:45.595092691 +0000 UTC m=+3.166968155 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Dec 09 12:05:45 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:05:45 compute-0 podman[97078]: 2025-12-09 12:05:45.70751288 +0000 UTC m=+3.279388354 container init 64f62853378db6206c7158679e0f3c7d46ef00bfe839336886f243a8af22a577 (image=quay.io/ceph/keepalived:2.2.4, name=nifty_banzai, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, name=keepalived, distribution-scope=public, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=, vcs-type=git, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, vendor=Red Hat, Inc.)
Dec 09 12:05:45 compute-0 podman[97078]: 2025-12-09 12:05:45.716818811 +0000 UTC m=+3.288694245 container start 64f62853378db6206c7158679e0f3c7d46ef00bfe839336886f243a8af22a577 (image=quay.io/ceph/keepalived:2.2.4, name=nifty_banzai, architecture=x86_64, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, description=keepalived for Ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, name=keepalived, distribution-scope=public, version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, vcs-type=git, vendor=Red Hat, Inc.)
Dec 09 12:05:45 compute-0 podman[97078]: 2025-12-09 12:05:45.720366692 +0000 UTC m=+3.292242136 container attach 64f62853378db6206c7158679e0f3c7d46ef00bfe839336886f243a8af22a577 (image=quay.io/ceph/keepalived:2.2.4, name=nifty_banzai, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, distribution-scope=public, version=2.2.4, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, name=keepalived, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 09 12:05:45 compute-0 nifty_banzai[97254]: 0 0
Dec 09 12:05:45 compute-0 systemd[1]: libpod-64f62853378db6206c7158679e0f3c7d46ef00bfe839336886f243a8af22a577.scope: Deactivated successfully.
Dec 09 12:05:45 compute-0 podman[97078]: 2025-12-09 12:05:45.723026525 +0000 UTC m=+3.294901969 container died 64f62853378db6206c7158679e0f3c7d46ef00bfe839336886f243a8af22a577 (image=quay.io/ceph/keepalived:2.2.4, name=nifty_banzai, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, description=keepalived for Ceph, com.redhat.component=keepalived-container, distribution-scope=public, io.buildah.version=1.28.2, vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, release=1793, io.openshift.tags=Ceph keepalived)
Dec 09 12:05:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-512723bf72c06342147a8310e18a7385573bd4ae75853322f09fcaf2142f7299-merged.mount: Deactivated successfully.
Dec 09 12:05:45 compute-0 podman[97078]: 2025-12-09 12:05:45.780731951 +0000 UTC m=+3.352607395 container remove 64f62853378db6206c7158679e0f3c7d46ef00bfe839336886f243a8af22a577 (image=quay.io/ceph/keepalived:2.2.4, name=nifty_banzai, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, distribution-scope=public, name=keepalived, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, io.openshift.expose-services=, vcs-type=git, version=2.2.4, io.buildah.version=1.28.2, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9)
Dec 09 12:05:45 compute-0 systemd[1]: libpod-conmon-64f62853378db6206c7158679e0f3c7d46ef00bfe839336886f243a8af22a577.scope: Deactivated successfully.
Dec 09 12:05:45 compute-0 systemd[1]: Reloading.
Dec 09 12:05:45 compute-0 systemd-sysv-generator[97325]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 12:05:45 compute-0 systemd-rc-local-generator[97322]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 12:05:46 compute-0 ceph-mon[74388]: pgmap v35: 167 pgs: 167 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 09 12:05:46 compute-0 systemd[1]: Reloading.
Dec 09 12:05:46 compute-0 systemd-sysv-generator[97380]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 12:05:46 compute-0 systemd-rc-local-generator[97374]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 12:05:46 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:46 : epoch 69381080 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5d8002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 09 12:05:46 compute-0 systemd[1]: Starting Ceph keepalived.nfs.cephfs.compute-0.zhroot for 750b57e3-924f-51a5-ab09-01517535f732...
Dec 09 12:05:46 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:46 : epoch 69381080 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5fc00a2b0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 09 12:05:46 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v36: 167 pgs: 167 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 09 12:05:46 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 12:05:46 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:46 : epoch 69381080 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5e4002f50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 09 12:05:46 compute-0 podman[97491]: 2025-12-09 12:05:46.791920137 +0000 UTC m=+0.038990162 container create 0bc793180175574622118f747f449be3da7bb7e36eba8f3c4ebdccafe004383d (image=quay.io/ceph/keepalived:2.2.4, name=ceph-750b57e3-924f-51a5-ab09-01517535f732-keepalived-nfs-cephfs-compute-0-zhroot, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, vcs-type=git, io.openshift.expose-services=, build-date=2023-02-22T09:23:20, release=1793, version=2.2.4)
Dec 09 12:05:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93ba79489824b35829f74a99ce63a20a1897b42111ef51f6fe36e7387f1313a5/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Dec 09 12:05:46 compute-0 podman[97491]: 2025-12-09 12:05:46.837621036 +0000 UTC m=+0.084691091 container init 0bc793180175574622118f747f449be3da7bb7e36eba8f3c4ebdccafe004383d (image=quay.io/ceph/keepalived:2.2.4, name=ceph-750b57e3-924f-51a5-ab09-01517535f732-keepalived-nfs-cephfs-compute-0-zhroot, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, vcs-type=git, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, release=1793, name=keepalived, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public)
Dec 09 12:05:46 compute-0 podman[97491]: 2025-12-09 12:05:46.842226051 +0000 UTC m=+0.089296076 container start 0bc793180175574622118f747f449be3da7bb7e36eba8f3c4ebdccafe004383d (image=quay.io/ceph/keepalived:2.2.4, name=ceph-750b57e3-924f-51a5-ab09-01517535f732-keepalived-nfs-cephfs-compute-0-zhroot, version=2.2.4, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, architecture=x86_64, name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, distribution-scope=public, description=keepalived for Ceph, build-date=2023-02-22T09:23:20, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container)
Dec 09 12:05:46 compute-0 bash[97491]: 0bc793180175574622118f747f449be3da7bb7e36eba8f3c4ebdccafe004383d
Dec 09 12:05:46 compute-0 podman[97491]: 2025-12-09 12:05:46.774896464 +0000 UTC m=+0.021966509 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Dec 09 12:05:46 compute-0 systemd[1]: Started Ceph keepalived.nfs.cephfs.compute-0.zhroot for 750b57e3-924f-51a5-ab09-01517535f732.
Dec 09 12:05:46 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-keepalived-nfs-cephfs-compute-0-zhroot[97506]: Tue Dec  9 12:05:46 2025: Starting Keepalived v2.2.4 (08/21,2021)
Dec 09 12:05:46 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-keepalived-nfs-cephfs-compute-0-zhroot[97506]: Tue Dec  9 12:05:46 2025: Running on Linux 5.14.0-648.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Dec 5 11:18:23 UTC 2025 (built for Linux 5.14.0)
Dec 09 12:05:46 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-keepalived-nfs-cephfs-compute-0-zhroot[97506]: Tue Dec  9 12:05:46 2025: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Dec 09 12:05:46 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-keepalived-nfs-cephfs-compute-0-zhroot[97506]: Tue Dec  9 12:05:46 2025: Configuration file /etc/keepalived/keepalived.conf
Dec 09 12:05:46 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-keepalived-nfs-cephfs-compute-0-zhroot[97506]: Tue Dec  9 12:05:46 2025: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Dec 09 12:05:46 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-keepalived-nfs-cephfs-compute-0-zhroot[97506]: Tue Dec  9 12:05:46 2025: Starting VRRP child process, pid=4
Dec 09 12:05:46 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-keepalived-nfs-cephfs-compute-0-zhroot[97506]: Tue Dec  9 12:05:46 2025: Startup complete
Dec 09 12:05:46 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-keepalived-nfs-cephfs-compute-0-zhroot[97506]: Tue Dec  9 12:05:46 2025: (VI_0) Entering BACKUP STATE (init)
Dec 09 12:05:46 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-keepalived-nfs-cephfs-compute-0-zhroot[97506]: Tue Dec  9 12:05:46 2025: VRRP_Script(check_backend) succeeded
Dec 09 12:05:46 compute-0 sudo[97015]: pam_unix(sudo:session): session closed for user root
Dec 09 12:05:46 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 12:05:47 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:47 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 12:05:47 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:47 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec 09 12:05:47 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:47 compute-0 ceph-mgr[74679]: [progress INFO root] complete: finished ev ee7bb843-d0d1-4e14-985c-07c78396d46e (Updating ingress.nfs.cephfs deployment (+6 -> 6))
Dec 09 12:05:47 compute-0 ceph-mgr[74679]: [progress INFO root] Completed event ee7bb843-d0d1-4e14-985c-07c78396d46e (Updating ingress.nfs.cephfs deployment (+6 -> 6)) in 26 seconds
Dec 09 12:05:47 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec 09 12:05:47 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:47 compute-0 ceph-mgr[74679]: [progress INFO root] update: starting ev 3c1ea544-72e7-4adc-805b-337fd1b94cc1 (Updating alertmanager deployment (+1 -> 1))
Dec 09 12:05:47 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Deploying daemon alertmanager.compute-0 on compute-0
Dec 09 12:05:47 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Deploying daemon alertmanager.compute-0 on compute-0
Dec 09 12:05:47 compute-0 sudo[97520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 12:05:47 compute-0 sudo[97520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:05:47 compute-0 sudo[97520]: pam_unix(sudo:session): session closed for user root
Dec 09 12:05:47 compute-0 sudo[97545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/alertmanager:v0.25.0 --timeout 895 _orch deploy --fsid 750b57e3-924f-51a5-ab09-01517535f732
Dec 09 12:05:47 compute-0 sudo[97545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:05:47 compute-0 radosgw[89472]: ====== starting new request req=0x7fb91647e5d0 =====
Dec 09 12:05:47 compute-0 radosgw[89472]: ====== req done req=0x7fb91647e5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 09 12:05:47 compute-0 radosgw[89472]: beast: 0x7fb91647e5d0: 192.168.122.100 - anonymous [09/Dec/2025:12:05:47.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 09 12:05:48 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:48 : epoch 69381080 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5f8001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 09 12:05:48 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:48 : epoch 69381080 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5f8001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 09 12:05:48 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v37: 167 pgs: 167 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 09 12:05:48 compute-0 ceph-mgr[74679]: [balancer INFO root] Optimize plan auto_2025-12-09_12:05:48
Dec 09 12:05:48 compute-0 ceph-mgr[74679]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 09 12:05:48 compute-0 ceph-mgr[74679]: [balancer INFO root] do_upmap
Dec 09 12:05:48 compute-0 ceph-mgr[74679]: [balancer INFO root] pools ['volumes', 'vms', 'backups', 'cephfs.cephfs.data', '.mgr', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.control', 'default.rgw.meta', 'default.rgw.log', 'images', '.nfs']
Dec 09 12:05:48 compute-0 ceph-mgr[74679]: [balancer INFO root] prepared 0/10 upmap changes
Dec 09 12:05:48 compute-0 ceph-mgr[74679]: [pg_autoscaler INFO root] _maybe_adjust
Dec 09 12:05:48 compute-0 ceph-mgr[74679]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 12:05:48 compute-0 ceph-mgr[74679]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 09 12:05:48 compute-0 ceph-mgr[74679]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 12:05:48 compute-0 ceph-mgr[74679]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 12:05:48 compute-0 ceph-mgr[74679]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 12:05:48 compute-0 ceph-mgr[74679]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 12:05:48 compute-0 ceph-mgr[74679]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 12:05:48 compute-0 ceph-mgr[74679]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 12:05:48 compute-0 ceph-mgr[74679]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 12:05:48 compute-0 ceph-mgr[74679]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 09 12:05:48 compute-0 ceph-mgr[74679]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 12:05:48 compute-0 ceph-mgr[74679]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 09 12:05:48 compute-0 ceph-mgr[74679]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 12:05:48 compute-0 ceph-mgr[74679]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 09 12:05:48 compute-0 ceph-mgr[74679]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 12:05:48 compute-0 ceph-mgr[74679]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 1)
Dec 09 12:05:48 compute-0 ceph-mgr[74679]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 12:05:48 compute-0 ceph-mgr[74679]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 1)
Dec 09 12:05:48 compute-0 ceph-mgr[74679]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 12:05:48 compute-0 ceph-mgr[74679]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 09 12:05:48 compute-0 ceph-mgr[74679]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 12:05:48 compute-0 ceph-mgr[74679]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 1)
Dec 09 12:05:48 compute-0 ceph-mgr[74679]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 09 12:05:48 compute-0 ceph-mgr[74679]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 1)
Dec 09 12:05:48 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0)
Dec 09 12:05:48 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Dec 09 12:05:48 compute-0 ceph-mgr[74679]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 12:05:48 compute-0 ceph-mgr[74679]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 12:05:48 compute-0 ceph-mgr[74679]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 12:05:48 compute-0 ceph-mgr[74679]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 12:05:48 compute-0 ceph-mgr[74679]: [volumes INFO mgr_util] scanning for idle connections..
Dec 09 12:05:48 compute-0 ceph-mgr[74679]: [volumes INFO mgr_util] cleaning up connections: []
Dec 09 12:05:48 compute-0 ceph-mgr[74679]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 09 12:05:48 compute-0 ceph-mgr[74679]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 12:05:48 compute-0 ceph-mgr[74679]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 12:05:48 compute-0 ceph-mgr[74679]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 12:05:48 compute-0 ceph-mgr[74679]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 12:05:48 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Dec 09 12:05:48 compute-0 ceph-mgr[74679]: [progress INFO root] Writing back 16 completed events
Dec 09 12:05:48 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 09 12:05:48 compute-0 ceph-mgr[74679]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 09 12:05:48 compute-0 ceph-mgr[74679]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 09 12:05:48 compute-0 ceph-mgr[74679]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 09 12:05:48 compute-0 ceph-mgr[74679]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 09 12:05:48 compute-0 ceph-mgr[74679]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 09 12:05:48 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:48 : epoch 69381080 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5fc00a2b0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 09 12:05:49 compute-0 ceph-mon[74388]: pgmap v36: 167 pgs: 167 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 09 12:05:49 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:49 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:49 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:49 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:49 compute-0 ceph-mon[74388]: Deploying daemon alertmanager.compute-0 on compute-0
Dec 09 12:05:49 compute-0 radosgw[89472]: ====== starting new request req=0x7fb91647e5d0 =====
Dec 09 12:05:49 compute-0 radosgw[89472]: ====== req done req=0x7fb91647e5d0 op status=0 http_status=200 latency=0.001000032s ======
Dec 09 12:05:49 compute-0 radosgw[89472]: beast: 0x7fb91647e5d0: 192.168.122.100 - anonymous [09/Dec/2025:12:05:49.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Dec 09 12:05:49 compute-0 podman[97612]: 2025-12-09 12:05:49.82703041 +0000 UTC m=+2.154615959 volume create e6af039ac5ed817beded6d4b05389fe6a9ebeca54c1db7ca6f43d2ddf6152647
Dec 09 12:05:49 compute-0 podman[97612]: 2025-12-09 12:05:49.836487187 +0000 UTC m=+2.164072736 container create 8fce23566429f8e24d2982300f3060678f88afc73d8c7c27060cd1821f4f1699 (image=quay.io/prometheus/alertmanager:v0.25.0, name=intelligent_lalande, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 09 12:05:49 compute-0 systemd[1]: Started libpod-conmon-8fce23566429f8e24d2982300f3060678f88afc73d8c7c27060cd1821f4f1699.scope.
Dec 09 12:05:49 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:05:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34dbf4e4ee037e5c922264bc2da800f186d1addb4a6ae6772cb6385b2f9ea377/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Dec 09 12:05:49 compute-0 podman[97612]: 2025-12-09 12:05:49.811398921 +0000 UTC m=+2.138984490 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Dec 09 12:05:49 compute-0 podman[97612]: 2025-12-09 12:05:49.912764274 +0000 UTC m=+2.240349843 container init 8fce23566429f8e24d2982300f3060678f88afc73d8c7c27060cd1821f4f1699 (image=quay.io/prometheus/alertmanager:v0.25.0, name=intelligent_lalande, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 09 12:05:49 compute-0 podman[97612]: 2025-12-09 12:05:49.921193338 +0000 UTC m=+2.248778887 container start 8fce23566429f8e24d2982300f3060678f88afc73d8c7c27060cd1821f4f1699 (image=quay.io/prometheus/alertmanager:v0.25.0, name=intelligent_lalande, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 09 12:05:49 compute-0 podman[97612]: 2025-12-09 12:05:49.924623205 +0000 UTC m=+2.252208784 container attach 8fce23566429f8e24d2982300f3060678f88afc73d8c7c27060cd1821f4f1699 (image=quay.io/prometheus/alertmanager:v0.25.0, name=intelligent_lalande, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 09 12:05:49 compute-0 intelligent_lalande[97755]: 65534 65534
Dec 09 12:05:49 compute-0 systemd[1]: libpod-8fce23566429f8e24d2982300f3060678f88afc73d8c7c27060cd1821f4f1699.scope: Deactivated successfully.
Dec 09 12:05:49 compute-0 podman[97612]: 2025-12-09 12:05:49.926048389 +0000 UTC m=+2.253633948 container died 8fce23566429f8e24d2982300f3060678f88afc73d8c7c27060cd1821f4f1699 (image=quay.io/prometheus/alertmanager:v0.25.0, name=intelligent_lalande, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 09 12:05:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-34dbf4e4ee037e5c922264bc2da800f186d1addb4a6ae6772cb6385b2f9ea377-merged.mount: Deactivated successfully.
Dec 09 12:05:49 compute-0 podman[97612]: 2025-12-09 12:05:49.964654807 +0000 UTC m=+2.292240356 container remove 8fce23566429f8e24d2982300f3060678f88afc73d8c7c27060cd1821f4f1699 (image=quay.io/prometheus/alertmanager:v0.25.0, name=intelligent_lalande, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 09 12:05:49 compute-0 podman[97612]: 2025-12-09 12:05:49.969799978 +0000 UTC m=+2.297385547 volume remove e6af039ac5ed817beded6d4b05389fe6a9ebeca54c1db7ca6f43d2ddf6152647
Dec 09 12:05:49 compute-0 systemd[1]: libpod-conmon-8fce23566429f8e24d2982300f3060678f88afc73d8c7c27060cd1821f4f1699.scope: Deactivated successfully.
Dec 09 12:05:50 compute-0 podman[97772]: 2025-12-09 12:05:50.037805227 +0000 UTC m=+0.043734570 volume create f9dfeeac2dc6cf2ce30485fb95703d50d20178b2bad411ce233b40904c5f5df4
Dec 09 12:05:50 compute-0 podman[97772]: 2025-12-09 12:05:50.046310733 +0000 UTC m=+0.052240076 container create c1d4b6e810b0fad787155870f6fd2d40fbeb1015477e8af3dab6d408c89c5d7d (image=quay.io/prometheus/alertmanager:v0.25.0, name=strange_wu, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 09 12:05:50 compute-0 systemd[1]: Started libpod-conmon-c1d4b6e810b0fad787155870f6fd2d40fbeb1015477e8af3dab6d408c89c5d7d.scope.
Dec 09 12:05:50 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:05:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03f593e6338c4ed3dee8c0fb390b593b2b5f1ba95bb9fee4ecea25cdfdc877ab/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Dec 09 12:05:50 compute-0 podman[97772]: 2025-12-09 12:05:50.11745496 +0000 UTC m=+0.123384323 container init c1d4b6e810b0fad787155870f6fd2d40fbeb1015477e8af3dab6d408c89c5d7d (image=quay.io/prometheus/alertmanager:v0.25.0, name=strange_wu, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 09 12:05:50 compute-0 podman[97772]: 2025-12-09 12:05:50.025382989 +0000 UTC m=+0.031312352 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Dec 09 12:05:50 compute-0 podman[97772]: 2025-12-09 12:05:50.124248272 +0000 UTC m=+0.130177615 container start c1d4b6e810b0fad787155870f6fd2d40fbeb1015477e8af3dab6d408c89c5d7d (image=quay.io/prometheus/alertmanager:v0.25.0, name=strange_wu, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 09 12:05:50 compute-0 strange_wu[97789]: 65534 65534
Dec 09 12:05:50 compute-0 systemd[1]: libpod-c1d4b6e810b0fad787155870f6fd2d40fbeb1015477e8af3dab6d408c89c5d7d.scope: Deactivated successfully.
Dec 09 12:05:50 compute-0 podman[97772]: 2025-12-09 12:05:50.12737744 +0000 UTC m=+0.133306783 container attach c1d4b6e810b0fad787155870f6fd2d40fbeb1015477e8af3dab6d408c89c5d7d (image=quay.io/prometheus/alertmanager:v0.25.0, name=strange_wu, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 09 12:05:50 compute-0 podman[97772]: 2025-12-09 12:05:50.12769385 +0000 UTC m=+0.133623193 container died c1d4b6e810b0fad787155870f6fd2d40fbeb1015477e8af3dab6d408c89c5d7d (image=quay.io/prometheus/alertmanager:v0.25.0, name=strange_wu, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 09 12:05:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-03f593e6338c4ed3dee8c0fb390b593b2b5f1ba95bb9fee4ecea25cdfdc877ab-merged.mount: Deactivated successfully.
Dec 09 12:05:50 compute-0 podman[97772]: 2025-12-09 12:05:50.161978273 +0000 UTC m=+0.167907616 container remove c1d4b6e810b0fad787155870f6fd2d40fbeb1015477e8af3dab6d408c89c5d7d (image=quay.io/prometheus/alertmanager:v0.25.0, name=strange_wu, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 09 12:05:50 compute-0 podman[97772]: 2025-12-09 12:05:50.164869684 +0000 UTC m=+0.170799047 volume remove f9dfeeac2dc6cf2ce30485fb95703d50d20178b2bad411ce233b40904c5f5df4
Dec 09 12:05:50 compute-0 systemd[1]: libpod-conmon-c1d4b6e810b0fad787155870f6fd2d40fbeb1015477e8af3dab6d408c89c5d7d.scope: Deactivated successfully.
Dec 09 12:05:50 compute-0 systemd[1]: Reloading.
Dec 09 12:05:50 compute-0 systemd-rc-local-generator[97844]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 12:05:50 compute-0 systemd-sysv-generator[97847]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 12:05:50 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Dec 09 12:05:50 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Dec 09 12:05:50 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:50 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Dec 09 12:05:50 compute-0 ceph-mgr[74679]: [progress INFO root] update: starting ev 9db9d74e-2fe1-4c6a-820b-6f0e8d5be6aa (PG autoscaler increasing pool 7 PGs from 1 to 32)
Dec 09 12:05:50 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0)
Dec 09 12:05:50 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Dec 09 12:05:50 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:50 : epoch 69381080 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5e4004050 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 09 12:05:50 compute-0 ceph-mon[74388]: pgmap v37: 167 pgs: 167 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 09 12:05:50 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Dec 09 12:05:50 compute-0 ovs-vsctl[97867]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Dec 09 12:05:50 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-keepalived-nfs-cephfs-compute-0-zhroot[97506]: Tue Dec  9 12:05:50 2025: (VI_0) Entering MASTER STATE
Dec 09 12:05:50 compute-0 systemd[1]: Reloading.
Dec 09 12:05:50 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:50 : epoch 69381080 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5f8001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 09 12:05:50 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v39: 167 pgs: 167 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 0 op/s
Dec 09 12:05:50 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0)
Dec 09 12:05:50 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 09 12:05:50 compute-0 systemd-rc-local-generator[97926]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 12:05:50 compute-0 systemd-sysv-generator[97929]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 12:05:50 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:50 : epoch 69381080 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5f8001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 09 12:05:50 compute-0 systemd[1]: Starting Ceph alertmanager.compute-0 for 750b57e3-924f-51a5-ab09-01517535f732...
Dec 09 12:05:51 compute-0 podman[97993]: 2025-12-09 12:05:51.026968003 +0000 UTC m=+0.037445163 volume create 51d179351513de58369bd4e6867eea2466a683fee921a61ead8386cfd6040cb1
Dec 09 12:05:51 compute-0 podman[97993]: 2025-12-09 12:05:51.037767851 +0000 UTC m=+0.048245001 container create 261d72bda626930d365f95b461af24c788e70517baf07a235542ff28df8c1c01 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-750b57e3-924f-51a5-ab09-01517535f732-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 09 12:05:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17b0c4e7a3fbb8b1c4787774681b498a96efaafc865be9affc26e25ed134e566/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Dec 09 12:05:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17b0c4e7a3fbb8b1c4787774681b498a96efaafc865be9affc26e25ed134e566/merged/etc/alertmanager supports timestamps until 2038 (0x7fffffff)
Dec 09 12:05:51 compute-0 podman[97993]: 2025-12-09 12:05:51.09589935 +0000 UTC m=+0.106376530 container init 261d72bda626930d365f95b461af24c788e70517baf07a235542ff28df8c1c01 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-750b57e3-924f-51a5-ab09-01517535f732-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 09 12:05:51 compute-0 podman[97993]: 2025-12-09 12:05:51.100945487 +0000 UTC m=+0.111422637 container start 261d72bda626930d365f95b461af24c788e70517baf07a235542ff28df8c1c01 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-750b57e3-924f-51a5-ab09-01517535f732-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 09 12:05:51 compute-0 bash[97993]: 261d72bda626930d365f95b461af24c788e70517baf07a235542ff28df8c1c01
Dec 09 12:05:51 compute-0 podman[97993]: 2025-12-09 12:05:51.011750476 +0000 UTC m=+0.022227646 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Dec 09 12:05:51 compute-0 systemd[1]: Started Ceph alertmanager.compute-0 for 750b57e3-924f-51a5-ab09-01517535f732.
Dec 09 12:05:51 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-alertmanager-compute-0[98009]: ts=2025-12-09T12:05:51.128Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)"
Dec 09 12:05:51 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-alertmanager-compute-0[98009]: ts=2025-12-09T12:05:51.128Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)"
Dec 09 12:05:51 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-alertmanager-compute-0[98009]: ts=2025-12-09T12:05:51.143Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.122.100 port=9094
Dec 09 12:05:51 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-alertmanager-compute-0[98009]: ts=2025-12-09T12:05:51.146Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s
Dec 09 12:05:51 compute-0 sudo[97545]: pam_unix(sudo:session): session closed for user root
Dec 09 12:05:51 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 09 12:05:51 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:51 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 09 12:05:51 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-alertmanager-compute-0[98009]: ts=2025-12-09T12:05:51.183Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml
Dec 09 12:05:51 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-alertmanager-compute-0[98009]: ts=2025-12-09T12:05:51.184Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml
Dec 09 12:05:51 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:51 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Dec 09 12:05:51 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-alertmanager-compute-0[98009]: ts=2025-12-09T12:05:51.192Z caller=tls_config.go:232 level=info msg="Listening on" address=192.168.122.100:9093
Dec 09 12:05:51 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-alertmanager-compute-0[98009]: ts=2025-12-09T12:05:51.192Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=192.168.122.100:9093
Dec 09 12:05:51 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:51 compute-0 ceph-mgr[74679]: [progress INFO root] complete: finished ev 3c1ea544-72e7-4adc-805b-337fd1b94cc1 (Updating alertmanager deployment (+1 -> 1))
Dec 09 12:05:51 compute-0 ceph-mgr[74679]: [progress INFO root] Completed event 3c1ea544-72e7-4adc-805b-337fd1b94cc1 (Updating alertmanager deployment (+1 -> 1)) in 4 seconds
Dec 09 12:05:51 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Dec 09 12:05:51 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:51 compute-0 ceph-mgr[74679]: [progress INFO root] update: starting ev 9f897e75-058a-4043-b10f-47ec68aedb73 (Updating grafana deployment (+1 -> 1))
Dec 09 12:05:51 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.services.monitoring] Regenerating cephadm self-signed grafana TLS certificates
Dec 09 12:05:51 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Regenerating cephadm self-signed grafana TLS certificates
Dec 09 12:05:51 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.grafana_cert}] v 0)
Dec 09 12:05:51 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:51 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.grafana_key}] v 0)
Dec 09 12:05:51 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:51 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"} v 0)
Dec 09 12:05:51 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Dec 09 12:05:51 compute-0 ceph-mgr[74679]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Dec 09 12:05:51 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_SSL_VERIFY}] v 0)
Dec 09 12:05:51 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:51 compute-0 ceph-mgr[74679]: [cephadm INFO cephadm.serve] Deploying daemon grafana.compute-0 on compute-0
Dec 09 12:05:51 compute-0 ceph-mgr[74679]: log_channel(cephadm) log [INF] : Deploying daemon grafana.compute-0 on compute-0
Dec 09 12:05:51 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Dec 09 12:05:51 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Dec 09 12:05:51 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Dec 09 12:05:51 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Dec 09 12:05:51 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Dec 09 12:05:51 compute-0 sudo[98160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 09 12:05:51 compute-0 ceph-mgr[74679]: [progress INFO root] update: starting ev 464bd008-10c9-4dfc-980e-eb6b5319bdd5 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Dec 09 12:05:51 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0)
Dec 09 12:05:51 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Dec 09 12:05:51 compute-0 sudo[98160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:05:51 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Dec 09 12:05:51 compute-0 sudo[98160]: pam_unix(sudo:session): session closed for user root
Dec 09 12:05:51 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:51 compute-0 ceph-mon[74388]: osdmap e49: 3 total, 3 up, 3 in
Dec 09 12:05:51 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Dec 09 12:05:51 compute-0 ceph-mon[74388]: pgmap v39: 167 pgs: 167 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 0 op/s
Dec 09 12:05:51 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 09 12:05:51 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:51 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:51 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:51 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:51 compute-0 ceph-mon[74388]: Regenerating cephadm self-signed grafana TLS certificates
Dec 09 12:05:51 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:51 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:51 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Dec 09 12:05:51 compute-0 ceph-mon[74388]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Dec 09 12:05:51 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:51 compute-0 ceph-mon[74388]: Deploying daemon grafana.compute-0 on compute-0
Dec 09 12:05:51 compute-0 sudo[98212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/750b57e3-924f-51a5-ab09-01517535f732/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/grafana:10.4.0 --timeout 895 _orch deploy --fsid 750b57e3-924f-51a5-ab09-01517535f732
Dec 09 12:05:51 compute-0 sudo[98212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 09 12:05:51 compute-0 radosgw[89472]: ====== starting new request req=0x7fb91647e5d0 =====
Dec 09 12:05:51 compute-0 radosgw[89472]: ====== req done req=0x7fb91647e5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 09 12:05:51 compute-0 radosgw[89472]: beast: 0x7fb91647e5d0: 192.168.122.100 - anonymous [09/Dec/2025:12:05:51.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 09 12:05:51 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 12:05:51 compute-0 lvm[98387]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 09 12:05:51 compute-0 lvm[98387]: VG ceph_vg0 finished
Dec 09 12:05:52 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:52 : epoch 69381080 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5fc00a2b0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 09 12:05:52 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Dec 09 12:05:52 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Dec 09 12:05:52 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Dec 09 12:05:52 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Dec 09 12:05:52 compute-0 ceph-mgr[74679]: [progress INFO root] update: starting ev 4ed3412b-9726-4314-a090-25b872508cef (PG autoscaler increasing pool 9 PGs from 1 to 32)
Dec 09 12:05:52 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0)
Dec 09 12:05:52 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Dec 09 12:05:52 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Dec 09 12:05:52 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Dec 09 12:05:52 compute-0 ceph-mon[74388]: osdmap e50: 3 total, 3 up, 3 in
Dec 09 12:05:52 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Dec 09 12:05:52 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:52 : epoch 69381080 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5fc00a2b0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 09 12:05:52 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v42: 198 pgs: 31 unknown, 167 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail
Dec 09 12:05:52 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0)
Dec 09 12:05:52 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 09 12:05:52 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0)
Dec 09 12:05:52 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 09 12:05:52 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:52 : epoch 69381080 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5fc00a2b0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 09 12:05:53 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-alertmanager-compute-0[98009]: ts=2025-12-09T12:05:53.146Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000105804s
Dec 09 12:05:53 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Dec 09 12:05:53 compute-0 radosgw[89472]: ====== starting new request req=0x7fb91647e5d0 =====
Dec 09 12:05:53 compute-0 radosgw[89472]: ====== req done req=0x7fb91647e5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 09 12:05:53 compute-0 radosgw[89472]: beast: 0x7fb91647e5d0: 192.168.122.100 - anonymous [09/Dec/2025:12:05:53.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 09 12:05:53 compute-0 crontab[98918]: (root) LIST (root)
Dec 09 12:05:53 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Dec 09 12:05:53 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Dec 09 12:05:53 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Dec 09 12:05:53 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Dec 09 12:05:53 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Dec 09 12:05:53 compute-0 ceph-mgr[74679]: [progress INFO root] update: starting ev 4a5a8094-8895-4343-8347-9c76221e7158 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Dec 09 12:05:53 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0)
Dec 09 12:05:53 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Dec 09 12:05:54 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Dec 09 12:05:54 compute-0 ceph-mon[74388]: osdmap e51: 3 total, 3 up, 3 in
Dec 09 12:05:54 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Dec 09 12:05:54 compute-0 ceph-mon[74388]: pgmap v42: 198 pgs: 31 unknown, 167 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail
Dec 09 12:05:54 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 09 12:05:54 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 09 12:05:54 compute-0 ceph-mon[74388]: 7.16 scrub starts
Dec 09 12:05:54 compute-0 ceph-mon[74388]: 7.16 scrub ok
Dec 09 12:05:54 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:54 : epoch 69381080 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5fc00a2b0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 09 12:05:54 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:54 : epoch 69381080 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5e4004050 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 09 12:05:54 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v44: 260 pgs: 62 unknown, 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 488 B/s rd, 0 op/s
Dec 09 12:05:54 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0)
Dec 09 12:05:54 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 09 12:05:54 compute-0 sshd-session[99024]: banner exchange: Connection from 3.134.148.59 port 35856: invalid format
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 52 pg[8.0( v 34'12 (0'0,34'12] local-lis/les=33/34 n=6 ec=33/33 lis/c=33/33 les/c/f=34/34/0 sis=52 pruub=10.865182877s) [1] r=0 lpr=52 pi=[33,52)/1 crt=34'12 lcod 34'11 mlcod 34'11 active pruub 180.726196289s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 52 pg[9.0( v 48'1080 (0'0,48'1080] local-lis/les=35/36 n=178 ec=35/35 lis/c=35/35 les/c/f=36/36/0 sis=52 pruub=12.886552811s) [1] r=0 lpr=52 pi=[35,52)/1 crt=48'1080 lcod 48'1079 mlcod 48'1079 active pruub 182.747817993s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 52 pg[8.0( v 34'12 lc 0'0 (0'0,34'12] local-lis/les=33/34 n=0 ec=33/33 lis/c=33/33 les/c/f=34/34/0 sis=52 pruub=10.865182877s) [1] r=0 lpr=52 pi=[33,52)/1 crt=34'12 lcod 34'11 mlcod 0'0 unknown pruub 180.726196289s@ mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:54 compute-0 ceph-osd[82922]: bluestore(/var/lib/ceph/osd/ceph-1).collection(8.0_head 0x55733953bb00) operator()   moving buffer(0x55733841efc8 space 0x5573382811f0 0x0~1000 clean)
Dec 09 12:05:54 compute-0 ceph-osd[82922]: bluestore(/var/lib/ceph/osd/ceph-1).collection(8.0_head 0x55733953bb00) operator()   moving buffer(0x55733846cca8 space 0x5573381ab870 0x0~1000 clean)
Dec 09 12:05:54 compute-0 ceph-osd[82922]: bluestore(/var/lib/ceph/osd/ceph-1).collection(8.0_head 0x55733953bb00) operator()   moving buffer(0x55733846cf28 space 0x5573383631f0 0x0~1000 clean)
Dec 09 12:05:54 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:54 : epoch 69381080 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5f8001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 52 pg[9.0( v 48'1080 lc 0'0 (0'0,48'1080] local-lis/les=35/36 n=5 ec=35/35 lis/c=35/35 les/c/f=36/36/0 sis=52 pruub=12.886552811s) [1] r=0 lpr=52 pi=[35,52)/1 crt=48'1080 lcod 48'1079 mlcod 0'0 unknown pruub 182.747817993s@ mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:54 compute-0 ceph-osd[82922]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5573397d2d80) operator()   moving buffer(0x5573384ac708 space 0x55733819def0 0x0~1000 clean)
Dec 09 12:05:54 compute-0 ceph-osd[82922]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5573397d2d80) operator()   moving buffer(0x5573384a5068 space 0x5573384b0eb0 0x0~1000 clean)
Dec 09 12:05:54 compute-0 ceph-osd[82922]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5573397d2d80) operator()   moving buffer(0x557338465248 space 0x557338294d10 0x0~1000 clean)
Dec 09 12:05:54 compute-0 ceph-osd[82922]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5573397d2d80) operator()   moving buffer(0x5573381ed608 space 0x5573383e3050 0x0~1000 clean)
Dec 09 12:05:54 compute-0 ceph-osd[82922]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5573397d2d80) operator()   moving buffer(0x55733846de28 space 0x557338337460 0x0~1000 clean)
Dec 09 12:05:54 compute-0 ceph-osd[82922]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5573397d2d80) operator()   moving buffer(0x557338231d88 space 0x5573383e2de0 0x0~1000 clean)
Dec 09 12:05:54 compute-0 ceph-osd[82922]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5573397d2d80) operator()   moving buffer(0x55733846d888 space 0x5573381aaaa0 0x0~1000 clean)
Dec 09 12:05:54 compute-0 ceph-osd[82922]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5573397d2d80) operator()   moving buffer(0x557338479ce8 space 0x557337865c80 0x0~1000 clean)
Dec 09 12:05:54 compute-0 ceph-osd[82922]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5573397d2d80) operator()   moving buffer(0x557338488e88 space 0x5573384b1050 0x0~1000 clean)
Dec 09 12:05:54 compute-0 ceph-osd[82922]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5573397d2d80) operator()   moving buffer(0x55733846cfc8 space 0x5573384b0de0 0x0~1000 clean)
Dec 09 12:05:54 compute-0 ceph-osd[82922]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5573397d2d80) operator()   moving buffer(0x55733846c3e8 space 0x5573384b1390 0x0~1000 clean)
Dec 09 12:05:54 compute-0 ceph-osd[82922]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5573397d2d80) operator()   moving buffer(0x5573384add88 space 0x5573383e3530 0x0~1000 clean)
Dec 09 12:05:54 compute-0 ceph-osd[82922]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5573397d2d80) operator()   moving buffer(0x5573384788e8 space 0x5573384b1120 0x0~1000 clean)
Dec 09 12:05:54 compute-0 ceph-osd[82922]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5573397d2d80) operator()   moving buffer(0x55733847fc48 space 0x5573384b12c0 0x0~1000 clean)
Dec 09 12:05:54 compute-0 ceph-osd[82922]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5573397d2d80) operator()   moving buffer(0x5573384ad388 space 0x5573381ab390 0x0~1000 clean)
Dec 09 12:05:54 compute-0 ceph-osd[82922]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5573397d2d80) operator()   moving buffer(0x55733846da68 space 0x557338247390 0x0~1000 clean)
Dec 09 12:05:54 compute-0 ceph-osd[82922]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5573397d2d80) operator()   moving buffer(0x55733846c708 space 0x5573382d1120 0x0~1000 clean)
Dec 09 12:05:54 compute-0 ceph-osd[82922]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5573397d2d80) operator()   moving buffer(0x557338478348 space 0x5573383e2c40 0x0~1000 clean)
Dec 09 12:05:54 compute-0 ceph-osd[82922]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5573397d2d80) operator()   moving buffer(0x557338478ca8 space 0x557338265d50 0x0~1000 clean)
Dec 09 12:05:54 compute-0 ceph-osd[82922]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5573397d2d80) operator()   moving buffer(0x55733846dd88 space 0x5573383517a0 0x0~1000 clean)
Dec 09 12:05:54 compute-0 ceph-osd[82922]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5573397d2d80) operator()   moving buffer(0x557338465888 space 0x557338247050 0x0~1000 clean)
Dec 09 12:05:54 compute-0 ceph-osd[82922]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5573397d2d80) operator()   moving buffer(0x557338465d88 space 0x5573384b04f0 0x0~1000 clean)
Dec 09 12:05:54 compute-0 ceph-osd[82922]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5573397d2d80) operator()   moving buffer(0x5573384a5a68 space 0x5573384b0f80 0x0~1000 clean)
Dec 09 12:05:54 compute-0 ceph-osd[82922]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5573397d2d80) operator()   moving buffer(0x55733846a8e8 space 0x557337865870 0x0~1000 clean)
Dec 09 12:05:54 compute-0 ceph-osd[82922]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5573397d2d80) operator()   moving buffer(0x5573384ace88 space 0x557338363600 0x0~1000 clean)
Dec 09 12:05:54 compute-0 ceph-osd[82922]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5573397d2d80) operator()   moving buffer(0x557338464168 space 0x5573384b11f0 0x0~1000 clean)
Dec 09 12:05:54 compute-0 ceph-osd[82922]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5573397d2d80) operator()   moving buffer(0x5573381dd9c8 space 0x5573384b1460 0x0~1000 clean)
Dec 09 12:05:54 compute-0 ceph-osd[82922]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5573397d2d80) operator()   moving buffer(0x5573384654c8 space 0x55733818a1b0 0x0~1000 clean)
Dec 09 12:05:54 compute-0 ceph-osd[82922]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5573397d2d80) operator()   moving buffer(0x5573384796a8 space 0x557338281940 0x0~1000 clean)
Dec 09 12:05:54 compute-0 ceph-osd[82922]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5573397d2d80) operator()   moving buffer(0x55733846d748 space 0x5573383e3600 0x0~1000 clean)
Dec 09 12:05:54 compute-0 ceph-osd[82922]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5573397d2d80) operator()   moving buffer(0x5573384791a8 space 0x557338336760 0x0~1000 clean)
Dec 09 12:05:54 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Dec 09 12:05:54 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Dec 09 12:05:54 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Dec 09 12:05:54 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Dec 09 12:05:54 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Dec 09 12:05:54 compute-0 ceph-mgr[74679]: [progress INFO root] update: starting ev aea1021c-7b0b-49aa-a2f7-45553c06e28b (PG autoscaler increasing pool 11 PGs from 1 to 32)
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[9.16( v 48'1080 lc 0'0 (0'0,48'1080] local-lis/les=35/36 n=5 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [1] r=0 lpr=52 pi=[35,52)/1 crt=48'1080 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[9.7( v 48'1080 lc 0'0 (0'0,48'1080] local-lis/les=35/36 n=6 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [1] r=0 lpr=52 pi=[35,52)/1 crt=48'1080 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[8.17( v 34'12 lc 0'0 (0'0,34'12] local-lis/les=33/34 n=0 ec=52/33 lis/c=33/33 les/c/f=34/34/0 sis=52) [1] r=0 lpr=52 pi=[33,52)/1 crt=34'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[8.6( v 34'12 lc 0'0 (0'0,34'12] local-lis/les=33/34 n=1 ec=52/33 lis/c=33/33 les/c/f=34/34/0 sis=52) [1] r=0 lpr=52 pi=[33,52)/1 crt=34'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[9.14( v 48'1080 lc 0'0 (0'0,48'1080] local-lis/les=35/36 n=5 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [1] r=0 lpr=52 pi=[35,52)/1 crt=48'1080 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[8.14( v 34'12 lc 0'0 (0'0,34'12] local-lis/les=33/34 n=0 ec=52/33 lis/c=33/33 les/c/f=34/34/0 sis=52) [1] r=0 lpr=52 pi=[33,52)/1 crt=34'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[8.15( v 34'12 lc 0'0 (0'0,34'12] local-lis/les=33/34 n=0 ec=52/33 lis/c=33/33 les/c/f=34/34/0 sis=52) [1] r=0 lpr=52 pi=[33,52)/1 crt=34'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[9.15( v 48'1080 lc 0'0 (0'0,48'1080] local-lis/les=35/36 n=5 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [1] r=0 lpr=52 pi=[35,52)/1 crt=48'1080 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[9.17( v 48'1080 lc 0'0 (0'0,48'1080] local-lis/les=35/36 n=5 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [1] r=0 lpr=52 pi=[35,52)/1 crt=48'1080 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[8.16( v 34'12 lc 0'0 (0'0,34'12] local-lis/les=33/34 n=0 ec=52/33 lis/c=33/33 les/c/f=34/34/0 sis=52) [1] r=0 lpr=52 pi=[33,52)/1 crt=34'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[9.11( v 48'1080 lc 0'0 (0'0,48'1080] local-lis/les=35/36 n=6 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [1] r=0 lpr=52 pi=[35,52)/1 crt=48'1080 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[8.10( v 34'12 lc 0'0 (0'0,34'12] local-lis/les=33/34 n=0 ec=52/33 lis/c=33/33 les/c/f=34/34/0 sis=52) [1] r=0 lpr=52 pi=[33,52)/1 crt=34'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[9.10( v 48'1080 lc 0'0 (0'0,48'1080] local-lis/les=35/36 n=6 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [1] r=0 lpr=52 pi=[35,52)/1 crt=48'1080 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[8.11( v 34'12 lc 0'0 (0'0,34'12] local-lis/les=33/34 n=0 ec=52/33 lis/c=33/33 les/c/f=34/34/0 sis=52) [1] r=0 lpr=52 pi=[33,52)/1 crt=34'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[9.3( v 48'1080 lc 0'0 (0'0,48'1080] local-lis/les=35/36 n=6 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [1] r=0 lpr=52 pi=[35,52)/1 crt=48'1080 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[8.2( v 34'12 lc 0'0 (0'0,34'12] local-lis/les=33/34 n=1 ec=52/33 lis/c=33/33 les/c/f=34/34/0 sis=52) [1] r=0 lpr=52 pi=[33,52)/1 crt=34'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[9.2( v 48'1080 lc 0'0 (0'0,48'1080] local-lis/les=35/36 n=6 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [1] r=0 lpr=52 pi=[35,52)/1 crt=48'1080 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[8.3( v 34'12 lc 0'0 (0'0,34'12] local-lis/les=33/34 n=1 ec=52/33 lis/c=33/33 les/c/f=34/34/0 sis=52) [1] r=0 lpr=52 pi=[33,52)/1 crt=34'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[8.f( v 34'12 lc 0'0 (0'0,34'12] local-lis/les=33/34 n=0 ec=52/33 lis/c=33/33 les/c/f=34/34/0 sis=52) [1] r=0 lpr=52 pi=[33,52)/1 crt=34'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[9.e( v 48'1080 lc 0'0 (0'0,48'1080] local-lis/les=35/36 n=6 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [1] r=0 lpr=52 pi=[35,52)/1 crt=48'1080 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[9.9( v 48'1080 lc 0'0 (0'0,48'1080] local-lis/les=35/36 n=6 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [1] r=0 lpr=52 pi=[35,52)/1 crt=48'1080 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[8.8( v 34'12 lc 0'0 (0'0,34'12] local-lis/les=33/34 n=0 ec=52/33 lis/c=33/33 les/c/f=34/34/0 sis=52) [1] r=0 lpr=52 pi=[33,52)/1 crt=34'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[9.8( v 48'1080 lc 0'0 (0'0,48'1080] local-lis/les=35/36 n=6 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [1] r=0 lpr=52 pi=[35,52)/1 crt=48'1080 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[8.9( v 34'12 lc 0'0 (0'0,34'12] local-lis/les=33/34 n=0 ec=52/33 lis/c=33/33 les/c/f=34/34/0 sis=52) [1] r=0 lpr=52 pi=[33,52)/1 crt=34'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[9.b( v 48'1080 lc 0'0 (0'0,48'1080] local-lis/les=35/36 n=6 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [1] r=0 lpr=52 pi=[35,52)/1 crt=48'1080 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[8.a( v 34'12 lc 0'0 (0'0,34'12] local-lis/les=33/34 n=0 ec=52/33 lis/c=33/33 les/c/f=34/34/0 sis=52) [1] r=0 lpr=52 pi=[33,52)/1 crt=34'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[9.f( v 48'1080 lc 0'0 (0'0,48'1080] local-lis/les=35/36 n=6 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [1] r=0 lpr=52 pi=[35,52)/1 crt=48'1080 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[8.e( v 34'12 lc 0'0 (0'0,34'12] local-lis/les=33/34 n=0 ec=52/33 lis/c=33/33 les/c/f=34/34/0 sis=52) [1] r=0 lpr=52 pi=[33,52)/1 crt=34'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[8.d( v 34'12 lc 0'0 (0'0,34'12] local-lis/les=33/34 n=0 ec=52/33 lis/c=33/33 les/c/f=34/34/0 sis=52) [1] r=0 lpr=52 pi=[33,52)/1 crt=34'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[9.c( v 48'1080 lc 0'0 (0'0,48'1080] local-lis/les=35/36 n=6 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [1] r=0 lpr=52 pi=[35,52)/1 crt=48'1080 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[8.c( v 34'12 lc 0'0 (0'0,34'12] local-lis/les=33/34 n=0 ec=52/33 lis/c=33/33 les/c/f=34/34/0 sis=52) [1] r=0 lpr=52 pi=[33,52)/1 crt=34'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[9.d( v 48'1080 lc 0'0 (0'0,48'1080] local-lis/les=35/36 n=6 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [1] r=0 lpr=52 pi=[35,52)/1 crt=48'1080 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[9.a( v 48'1080 lc 0'0 (0'0,48'1080] local-lis/les=35/36 n=6 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [1] r=0 lpr=52 pi=[35,52)/1 crt=48'1080 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[8.b( v 34'12 lc 0'0 (0'0,34'12] local-lis/les=33/34 n=0 ec=52/33 lis/c=33/33 les/c/f=34/34/0 sis=52) [1] r=0 lpr=52 pi=[33,52)/1 crt=34'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[8.1( v 34'12 (0'0,34'12] local-lis/les=33/34 n=1 ec=52/33 lis/c=33/33 les/c/f=34/34/0 sis=52) [1] r=0 lpr=52 pi=[33,52)/1 crt=34'12 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[9.1( v 48'1080 lc 0'0 (0'0,48'1080] local-lis/les=35/36 n=6 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [1] r=0 lpr=52 pi=[35,52)/1 crt=48'1080 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[9.6( v 48'1080 lc 0'0 (0'0,48'1080] local-lis/les=35/36 n=6 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [1] r=0 lpr=52 pi=[35,52)/1 crt=48'1080 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[8.7( v 34'12 lc 0'0 (0'0,34'12] local-lis/les=33/34 n=0 ec=52/33 lis/c=33/33 les/c/f=34/34/0 sis=52) [1] r=0 lpr=52 pi=[33,52)/1 crt=34'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[8.5( v 34'12 lc 0'0 (0'0,34'12] local-lis/les=33/34 n=1 ec=52/33 lis/c=33/33 les/c/f=34/34/0 sis=52) [1] r=0 lpr=52 pi=[33,52)/1 crt=34'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[9.4( v 48'1080 lc 0'0 (0'0,48'1080] local-lis/les=35/36 n=6 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [1] r=0 lpr=52 pi=[35,52)/1 crt=48'1080 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[9.5( v 48'1080 lc 0'0 (0'0,48'1080] local-lis/les=35/36 n=6 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [1] r=0 lpr=52 pi=[35,52)/1 crt=48'1080 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[8.4( v 34'12 lc 0'0 (0'0,34'12] local-lis/les=33/34 n=1 ec=52/33 lis/c=33/33 les/c/f=34/34/0 sis=52) [1] r=0 lpr=52 pi=[33,52)/1 crt=34'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[9.1a( v 48'1080 lc 0'0 (0'0,48'1080] local-lis/les=35/36 n=5 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [1] r=0 lpr=52 pi=[35,52)/1 crt=48'1080 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[8.1b( v 34'12 lc 0'0 (0'0,34'12] local-lis/les=33/34 n=0 ec=52/33 lis/c=33/33 les/c/f=34/34/0 sis=52) [1] r=0 lpr=52 pi=[33,52)/1 crt=34'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[9.1b( v 48'1080 lc 0'0 (0'0,48'1080] local-lis/les=35/36 n=5 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [1] r=0 lpr=52 pi=[35,52)/1 crt=48'1080 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[8.1a( v 34'12 lc 0'0 (0'0,34'12] local-lis/les=33/34 n=0 ec=52/33 lis/c=33/33 les/c/f=34/34/0 sis=52) [1] r=0 lpr=52 pi=[33,52)/1 crt=34'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:54 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"} v 0)
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[9.18( v 48'1080 lc 0'0 (0'0,48'1080] local-lis/les=35/36 n=5 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [1] r=0 lpr=52 pi=[35,52)/1 crt=48'1080 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[8.19( v 34'12 lc 0'0 (0'0,34'12] local-lis/les=33/34 n=0 ec=52/33 lis/c=33/33 les/c/f=34/34/0 sis=52) [1] r=0 lpr=52 pi=[33,52)/1 crt=34'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[8.18( v 34'12 lc 0'0 (0'0,34'12] local-lis/les=33/34 n=0 ec=52/33 lis/c=33/33 les/c/f=34/34/0 sis=52) [1] r=0 lpr=52 pi=[33,52)/1 crt=34'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[9.19( v 48'1080 lc 0'0 (0'0,48'1080] local-lis/les=35/36 n=5 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [1] r=0 lpr=52 pi=[35,52)/1 crt=48'1080 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[8.1f( v 34'12 lc 0'0 (0'0,34'12] local-lis/les=33/34 n=0 ec=52/33 lis/c=33/33 les/c/f=34/34/0 sis=52) [1] r=0 lpr=52 pi=[33,52)/1 crt=34'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[9.1e( v 48'1080 lc 0'0 (0'0,48'1080] local-lis/les=35/36 n=5 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [1] r=0 lpr=52 pi=[35,52)/1 crt=48'1080 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[9.1f( v 48'1080 lc 0'0 (0'0,48'1080] local-lis/les=35/36 n=5 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [1] r=0 lpr=52 pi=[35,52)/1 crt=48'1080 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[8.1e( v 34'12 lc 0'0 (0'0,34'12] local-lis/les=33/34 n=0 ec=52/33 lis/c=33/33 les/c/f=34/34/0 sis=52) [1] r=0 lpr=52 pi=[33,52)/1 crt=34'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[9.1c( v 48'1080 lc 0'0 (0'0,48'1080] local-lis/les=35/36 n=5 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [1] r=0 lpr=52 pi=[35,52)/1 crt=48'1080 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[8.1d( v 34'12 lc 0'0 (0'0,34'12] local-lis/les=33/34 n=0 ec=52/33 lis/c=33/33 les/c/f=34/34/0 sis=52) [1] r=0 lpr=52 pi=[33,52)/1 crt=34'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[8.1c( v 34'12 lc 0'0 (0'0,34'12] local-lis/les=33/34 n=0 ec=52/33 lis/c=33/33 les/c/f=34/34/0 sis=52) [1] r=0 lpr=52 pi=[33,52)/1 crt=34'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[9.1d( v 48'1080 lc 0'0 (0'0,48'1080] local-lis/les=35/36 n=5 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [1] r=0 lpr=52 pi=[35,52)/1 crt=48'1080 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[9.12( v 48'1080 lc 0'0 (0'0,48'1080] local-lis/les=35/36 n=6 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [1] r=0 lpr=52 pi=[35,52)/1 crt=48'1080 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[8.13( v 34'12 lc 0'0 (0'0,34'12] local-lis/les=33/34 n=0 ec=52/33 lis/c=33/33 les/c/f=34/34/0 sis=52) [1] r=0 lpr=52 pi=[33,52)/1 crt=34'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[9.13( v 48'1080 lc 0'0 (0'0,48'1080] local-lis/les=35/36 n=5 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [1] r=0 lpr=52 pi=[35,52)/1 crt=48'1080 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[8.12( v 34'12 lc 0'0 (0'0,34'12] local-lis/les=33/34 n=0 ec=52/33 lis/c=33/33 les/c/f=34/34/0 sis=52) [1] r=0 lpr=52 pi=[33,52)/1 crt=34'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:54 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[8.6( v 34'12 (0'0,34'12] local-lis/les=52/53 n=1 ec=52/33 lis/c=33/33 les/c/f=34/34/0 sis=52) [1] r=0 lpr=52 pi=[33,52)/1 crt=34'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[9.16( v 48'1080 (0'0,48'1080] local-lis/les=52/53 n=5 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [1] r=0 lpr=52 pi=[35,52)/1 crt=48'1080 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[9.7( v 48'1080 (0'0,48'1080] local-lis/les=52/53 n=6 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [1] r=0 lpr=52 pi=[35,52)/1 crt=48'1080 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[8.17( v 34'12 (0'0,34'12] local-lis/les=52/53 n=0 ec=52/33 lis/c=33/33 les/c/f=34/34/0 sis=52) [1] r=0 lpr=52 pi=[33,52)/1 crt=34'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[9.14( v 48'1080 (0'0,48'1080] local-lis/les=52/53 n=5 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [1] r=0 lpr=52 pi=[35,52)/1 crt=48'1080 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[8.15( v 34'12 (0'0,34'12] local-lis/les=52/53 n=0 ec=52/33 lis/c=33/33 les/c/f=34/34/0 sis=52) [1] r=0 lpr=52 pi=[33,52)/1 crt=34'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[8.14( v 34'12 (0'0,34'12] local-lis/les=52/53 n=0 ec=52/33 lis/c=33/33 les/c/f=34/34/0 sis=52) [1] r=0 lpr=52 pi=[33,52)/1 crt=34'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[8.16( v 34'12 (0'0,34'12] local-lis/les=52/53 n=0 ec=52/33 lis/c=33/33 les/c/f=34/34/0 sis=52) [1] r=0 lpr=52 pi=[33,52)/1 crt=34'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[9.17( v 48'1080 (0'0,48'1080] local-lis/les=52/53 n=5 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [1] r=0 lpr=52 pi=[35,52)/1 crt=48'1080 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[9.11( v 48'1080 (0'0,48'1080] local-lis/les=52/53 n=6 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [1] r=0 lpr=52 pi=[35,52)/1 crt=48'1080 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[8.10( v 34'12 (0'0,34'12] local-lis/les=52/53 n=0 ec=52/33 lis/c=33/33 les/c/f=34/34/0 sis=52) [1] r=0 lpr=52 pi=[33,52)/1 crt=34'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[8.11( v 34'12 (0'0,34'12] local-lis/les=52/53 n=0 ec=52/33 lis/c=33/33 les/c/f=34/34/0 sis=52) [1] r=0 lpr=52 pi=[33,52)/1 crt=34'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[9.10( v 48'1080 (0'0,48'1080] local-lis/les=52/53 n=6 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [1] r=0 lpr=52 pi=[35,52)/1 crt=48'1080 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[9.3( v 48'1080 (0'0,48'1080] local-lis/les=52/53 n=6 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [1] r=0 lpr=52 pi=[35,52)/1 crt=48'1080 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[8.2( v 34'12 (0'0,34'12] local-lis/les=52/53 n=1 ec=52/33 lis/c=33/33 les/c/f=34/34/0 sis=52) [1] r=0 lpr=52 pi=[33,52)/1 crt=34'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[9.2( v 48'1080 (0'0,48'1080] local-lis/les=52/53 n=6 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [1] r=0 lpr=52 pi=[35,52)/1 crt=48'1080 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[8.3( v 34'12 (0'0,34'12] local-lis/les=52/53 n=1 ec=52/33 lis/c=33/33 les/c/f=34/34/0 sis=52) [1] r=0 lpr=52 pi=[33,52)/1 crt=34'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[8.f( v 34'12 (0'0,34'12] local-lis/les=52/53 n=0 ec=52/33 lis/c=33/33 les/c/f=34/34/0 sis=52) [1] r=0 lpr=52 pi=[33,52)/1 crt=34'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[9.9( v 48'1080 (0'0,48'1080] local-lis/les=52/53 n=6 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [1] r=0 lpr=52 pi=[35,52)/1 crt=48'1080 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[9.e( v 48'1080 (0'0,48'1080] local-lis/les=52/53 n=6 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [1] r=0 lpr=52 pi=[35,52)/1 crt=48'1080 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[8.8( v 34'12 (0'0,34'12] local-lis/les=52/53 n=0 ec=52/33 lis/c=33/33 les/c/f=34/34/0 sis=52) [1] r=0 lpr=52 pi=[33,52)/1 crt=34'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[8.a( v 34'12 (0'0,34'12] local-lis/les=52/53 n=0 ec=52/33 lis/c=33/33 les/c/f=34/34/0 sis=52) [1] r=0 lpr=52 pi=[33,52)/1 crt=34'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[8.9( v 34'12 (0'0,34'12] local-lis/les=52/53 n=0 ec=52/33 lis/c=33/33 les/c/f=34/34/0 sis=52) [1] r=0 lpr=52 pi=[33,52)/1 crt=34'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[9.b( v 48'1080 (0'0,48'1080] local-lis/les=52/53 n=6 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [1] r=0 lpr=52 pi=[35,52)/1 crt=48'1080 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[9.8( v 48'1080 (0'0,48'1080] local-lis/les=52/53 n=6 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [1] r=0 lpr=52 pi=[35,52)/1 crt=48'1080 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[9.f( v 48'1080 (0'0,48'1080] local-lis/les=52/53 n=6 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [1] r=0 lpr=52 pi=[35,52)/1 crt=48'1080 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[8.e( v 34'12 (0'0,34'12] local-lis/les=52/53 n=0 ec=52/33 lis/c=33/33 les/c/f=34/34/0 sis=52) [1] r=0 lpr=52 pi=[33,52)/1 crt=34'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[8.d( v 34'12 (0'0,34'12] local-lis/les=52/53 n=0 ec=52/33 lis/c=33/33 les/c/f=34/34/0 sis=52) [1] r=0 lpr=52 pi=[33,52)/1 crt=34'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[9.d( v 48'1080 (0'0,48'1080] local-lis/les=52/53 n=6 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [1] r=0 lpr=52 pi=[35,52)/1 crt=48'1080 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[9.c( v 48'1080 (0'0,48'1080] local-lis/les=52/53 n=6 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [1] r=0 lpr=52 pi=[35,52)/1 crt=48'1080 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[9.a( v 48'1080 (0'0,48'1080] local-lis/les=52/53 n=6 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [1] r=0 lpr=52 pi=[35,52)/1 crt=48'1080 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[8.c( v 34'12 (0'0,34'12] local-lis/les=52/53 n=0 ec=52/33 lis/c=33/33 les/c/f=34/34/0 sis=52) [1] r=0 lpr=52 pi=[33,52)/1 crt=34'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[8.1( v 34'12 (0'0,34'12] local-lis/les=52/53 n=1 ec=52/33 lis/c=33/33 les/c/f=34/34/0 sis=52) [1] r=0 lpr=52 pi=[33,52)/1 crt=34'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[8.b( v 34'12 (0'0,34'12] local-lis/les=52/53 n=0 ec=52/33 lis/c=33/33 les/c/f=34/34/0 sis=52) [1] r=0 lpr=52 pi=[33,52)/1 crt=34'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[9.0( v 48'1080 (0'0,48'1080] local-lis/les=52/53 n=5 ec=35/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [1] r=0 lpr=52 pi=[35,52)/1 crt=48'1080 lcod 48'1079 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[9.1( v 48'1080 (0'0,48'1080] local-lis/les=52/53 n=6 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [1] r=0 lpr=52 pi=[35,52)/1 crt=48'1080 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[8.0( v 34'12 (0'0,34'12] local-lis/les=52/53 n=0 ec=33/33 lis/c=33/33 les/c/f=34/34/0 sis=52) [1] r=0 lpr=52 pi=[33,52)/1 crt=34'12 lcod 34'11 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[8.7( v 34'12 (0'0,34'12] local-lis/les=52/53 n=0 ec=52/33 lis/c=33/33 les/c/f=34/34/0 sis=52) [1] r=0 lpr=52 pi=[33,52)/1 crt=34'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[9.6( v 48'1080 (0'0,48'1080] local-lis/les=52/53 n=6 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [1] r=0 lpr=52 pi=[35,52)/1 crt=48'1080 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[9.4( v 48'1080 (0'0,48'1080] local-lis/les=52/53 n=6 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [1] r=0 lpr=52 pi=[35,52)/1 crt=48'1080 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[9.15( v 48'1080 (0'0,48'1080] local-lis/les=52/53 n=5 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [1] r=0 lpr=52 pi=[35,52)/1 crt=48'1080 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[8.5( v 34'12 (0'0,34'12] local-lis/les=52/53 n=1 ec=52/33 lis/c=33/33 les/c/f=34/34/0 sis=52) [1] r=0 lpr=52 pi=[33,52)/1 crt=34'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[9.5( v 48'1080 (0'0,48'1080] local-lis/les=52/53 n=6 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [1] r=0 lpr=52 pi=[35,52)/1 crt=48'1080 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[9.1b( v 48'1080 (0'0,48'1080] local-lis/les=52/53 n=5 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [1] r=0 lpr=52 pi=[35,52)/1 crt=48'1080 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[9.1a( v 48'1080 (0'0,48'1080] local-lis/les=52/53 n=5 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [1] r=0 lpr=52 pi=[35,52)/1 crt=48'1080 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[8.4( v 34'12 (0'0,34'12] local-lis/les=52/53 n=1 ec=52/33 lis/c=33/33 les/c/f=34/34/0 sis=52) [1] r=0 lpr=52 pi=[33,52)/1 crt=34'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[8.1b( v 34'12 (0'0,34'12] local-lis/les=52/53 n=0 ec=52/33 lis/c=33/33 les/c/f=34/34/0 sis=52) [1] r=0 lpr=52 pi=[33,52)/1 crt=34'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[8.1a( v 34'12 (0'0,34'12] local-lis/les=52/53 n=0 ec=52/33 lis/c=33/33 les/c/f=34/34/0 sis=52) [1] r=0 lpr=52 pi=[33,52)/1 crt=34'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[9.18( v 48'1080 (0'0,48'1080] local-lis/les=52/53 n=5 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [1] r=0 lpr=52 pi=[35,52)/1 crt=48'1080 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[8.19( v 34'12 (0'0,34'12] local-lis/les=52/53 n=0 ec=52/33 lis/c=33/33 les/c/f=34/34/0 sis=52) [1] r=0 lpr=52 pi=[33,52)/1 crt=34'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[9.19( v 48'1080 (0'0,48'1080] local-lis/les=52/53 n=5 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [1] r=0 lpr=52 pi=[35,52)/1 crt=48'1080 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[9.1e( v 48'1080 (0'0,48'1080] local-lis/les=52/53 n=5 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [1] r=0 lpr=52 pi=[35,52)/1 crt=48'1080 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[8.1f( v 34'12 (0'0,34'12] local-lis/les=52/53 n=0 ec=52/33 lis/c=33/33 les/c/f=34/34/0 sis=52) [1] r=0 lpr=52 pi=[33,52)/1 crt=34'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[9.1f( v 48'1080 (0'0,48'1080] local-lis/les=52/53 n=5 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [1] r=0 lpr=52 pi=[35,52)/1 crt=48'1080 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[8.18( v 34'12 (0'0,34'12] local-lis/les=52/53 n=0 ec=52/33 lis/c=33/33 les/c/f=34/34/0 sis=52) [1] r=0 lpr=52 pi=[33,52)/1 crt=34'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[8.1e( v 34'12 (0'0,34'12] local-lis/les=52/53 n=0 ec=52/33 lis/c=33/33 les/c/f=34/34/0 sis=52) [1] r=0 lpr=52 pi=[33,52)/1 crt=34'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[9.1c( v 48'1080 (0'0,48'1080] local-lis/les=52/53 n=5 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [1] r=0 lpr=52 pi=[35,52)/1 crt=48'1080 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[8.1c( v 34'12 (0'0,34'12] local-lis/les=52/53 n=0 ec=52/33 lis/c=33/33 les/c/f=34/34/0 sis=52) [1] r=0 lpr=52 pi=[33,52)/1 crt=34'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[9.1d( v 48'1080 (0'0,48'1080] local-lis/les=52/53 n=5 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [1] r=0 lpr=52 pi=[35,52)/1 crt=48'1080 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[9.12( v 48'1080 (0'0,48'1080] local-lis/les=52/53 n=6 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [1] r=0 lpr=52 pi=[35,52)/1 crt=48'1080 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[8.1d( v 34'12 (0'0,34'12] local-lis/les=52/53 n=0 ec=52/33 lis/c=33/33 les/c/f=34/34/0 sis=52) [1] r=0 lpr=52 pi=[33,52)/1 crt=34'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[9.13( v 48'1080 (0'0,48'1080] local-lis/les=52/53 n=5 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [1] r=0 lpr=52 pi=[35,52)/1 crt=48'1080 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[8.13( v 34'12 (0'0,34'12] local-lis/les=52/53 n=0 ec=52/33 lis/c=33/33 les/c/f=34/34/0 sis=52) [1] r=0 lpr=52 pi=[33,52)/1 crt=34'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:05:54 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 53 pg[8.12( v 34'12 (0'0,34'12] local-lis/les=52/53 n=0 ec=52/33 lis/c=33/33 les/c/f=34/34/0 sis=52) [1] r=0 lpr=52 pi=[33,52)/1 crt=34'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 09 12:05:55 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-haproxy-nfs-cephfs-compute-0-aacrrf[96976]: [WARNING] 342/120555 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 09 12:05:55 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Dec 09 12:05:55 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Dec 09 12:05:55 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Dec 09 12:05:55 compute-0 ceph-mon[74388]: osdmap e52: 3 total, 3 up, 3 in
Dec 09 12:05:55 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Dec 09 12:05:55 compute-0 ceph-mon[74388]: 7.4 scrub starts
Dec 09 12:05:55 compute-0 ceph-mon[74388]: 7.4 scrub ok
Dec 09 12:05:55 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 09 12:05:55 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Dec 09 12:05:55 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Dec 09 12:05:55 compute-0 ceph-mon[74388]: osdmap e53: 3 total, 3 up, 3 in
Dec 09 12:05:55 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Dec 09 12:05:55 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Dec 09 12:05:55 compute-0 ceph-mgr[74679]: [progress INFO root] Writing back 17 completed events
Dec 09 12:05:55 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 09 12:05:55 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Dec 09 12:05:55 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:55 compute-0 ceph-mgr[74679]: [progress WARNING root] Starting Global Recovery Event,93 pgs not in active + clean state
Dec 09 12:05:55 compute-0 radosgw[89472]: ====== starting new request req=0x7fb91647e5d0 =====
Dec 09 12:05:55 compute-0 radosgw[89472]: ====== req done req=0x7fb91647e5d0 op status=0 http_status=200 latency=0.001000031s ======
Dec 09 12:05:55 compute-0 radosgw[89472]: beast: 0x7fb91647e5d0: 192.168.122.100 - anonymous [09/Dec/2025:12:05:55.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 09 12:05:55 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Dec 09 12:05:56 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Dec 09 12:05:56 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Dec 09 12:05:56 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Dec 09 12:05:56 compute-0 ceph-mgr[74679]: [progress INFO root] update: starting ev 47dcda8d-d584-4270-ab82-48a01dc7057e (PG autoscaler increasing pool 12 PGs from 1 to 32)
Dec 09 12:05:56 compute-0 ceph-mgr[74679]: [progress INFO root] complete: finished ev 9db9d74e-2fe1-4c6a-820b-6f0e8d5be6aa (PG autoscaler increasing pool 7 PGs from 1 to 32)
Dec 09 12:05:56 compute-0 ceph-mgr[74679]: [progress INFO root] Completed event 9db9d74e-2fe1-4c6a-820b-6f0e8d5be6aa (PG autoscaler increasing pool 7 PGs from 1 to 32) in 6 seconds
Dec 09 12:05:56 compute-0 ceph-mgr[74679]: [progress INFO root] complete: finished ev 464bd008-10c9-4dfc-980e-eb6b5319bdd5 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Dec 09 12:05:56 compute-0 ceph-mgr[74679]: [progress INFO root] Completed event 464bd008-10c9-4dfc-980e-eb6b5319bdd5 (PG autoscaler increasing pool 8 PGs from 1 to 32) in 5 seconds
Dec 09 12:05:56 compute-0 ceph-mgr[74679]: [progress INFO root] complete: finished ev 4ed3412b-9726-4314-a090-25b872508cef (PG autoscaler increasing pool 9 PGs from 1 to 32)
Dec 09 12:05:56 compute-0 ceph-mgr[74679]: [progress INFO root] Completed event 4ed3412b-9726-4314-a090-25b872508cef (PG autoscaler increasing pool 9 PGs from 1 to 32) in 4 seconds
Dec 09 12:05:56 compute-0 ceph-mgr[74679]: [progress INFO root] complete: finished ev 4a5a8094-8895-4343-8347-9c76221e7158 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Dec 09 12:05:56 compute-0 ceph-mgr[74679]: [progress INFO root] Completed event 4a5a8094-8895-4343-8347-9c76221e7158 (PG autoscaler increasing pool 10 PGs from 1 to 32) in 2 seconds
Dec 09 12:05:56 compute-0 ceph-mgr[74679]: [progress INFO root] complete: finished ev aea1021c-7b0b-49aa-a2f7-45553c06e28b (PG autoscaler increasing pool 11 PGs from 1 to 32)
Dec 09 12:05:56 compute-0 ceph-mgr[74679]: [progress INFO root] Completed event aea1021c-7b0b-49aa-a2f7-45553c06e28b (PG autoscaler increasing pool 11 PGs from 1 to 32) in 1 seconds
Dec 09 12:05:56 compute-0 ceph-mgr[74679]: [progress INFO root] complete: finished ev 47dcda8d-d584-4270-ab82-48a01dc7057e (PG autoscaler increasing pool 12 PGs from 1 to 32)
Dec 09 12:05:56 compute-0 ceph-mgr[74679]: [progress INFO root] Completed event 47dcda8d-d584-4270-ab82-48a01dc7057e (PG autoscaler increasing pool 12 PGs from 1 to 32) in 0 seconds
Dec 09 12:05:56 compute-0 ceph-mon[74388]: pgmap v44: 260 pgs: 62 unknown, 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 488 B/s rd, 0 op/s
Dec 09 12:05:56 compute-0 ceph-mon[74388]: 7.1d scrub starts
Dec 09 12:05:56 compute-0 ceph-mon[74388]: 7.1d scrub ok
Dec 09 12:05:56 compute-0 ceph-mon[74388]: 9.16 scrub starts
Dec 09 12:05:56 compute-0 ceph-mon[74388]: 9.16 scrub ok
Dec 09 12:05:56 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:05:56 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Dec 09 12:05:56 compute-0 ceph-mon[74388]: osdmap e54: 3 total, 3 up, 3 in
Dec 09 12:05:56 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 9.7 deep-scrub starts
Dec 09 12:05:56 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 9.7 deep-scrub ok
Dec 09 12:05:56 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:56 : epoch 69381080 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5f8001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 09 12:05:56 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:56 : epoch 69381080 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5fc00a2b0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 09 12:05:56 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v47: 291 pgs: 93 unknown, 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 498 B/s rd, 0 op/s
Dec 09 12:05:56 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"} v 0)
Dec 09 12:05:56 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 09 12:05:56 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0)
Dec 09 12:05:56 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 09 12:05:56 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 09 12:05:56 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:56 : epoch 69381080 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5e4004050 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 09 12:05:56 compute-0 systemd[1]: Starting Hostname Service...
Dec 09 12:05:57 compute-0 systemd[1]: Started Hostname Service.
Dec 09 12:05:57 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Dec 09 12:05:57 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Dec 09 12:05:57 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Dec 09 12:05:57 compute-0 radosgw[89472]: ====== starting new request req=0x7fb91647e5d0 =====
Dec 09 12:05:57 compute-0 radosgw[89472]: ====== req done req=0x7fb91647e5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 09 12:05:57 compute-0 radosgw[89472]: beast: 0x7fb91647e5d0: 192.168.122.100 - anonymous [09/Dec/2025:12:05:57.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 09 12:05:58 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Dec 09 12:05:58 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec 09 12:05:58 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Dec 09 12:05:58 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Dec 09 12:05:58 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:58 : epoch 69381080 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5f8001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 09 12:05:58 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v48: 291 pgs: 93 unknown, 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 09 12:05:58 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:58 : epoch 69381080 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5dc000ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 09 12:05:58 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:05:58 : epoch 69381080 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5fc00a2b0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 09 12:05:59 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Dec 09 12:05:59 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Dec 09 12:05:59 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 9.14 deep-scrub starts
Dec 09 12:05:59 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"} v 0)
Dec 09 12:05:59 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 09 12:05:59 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0)
Dec 09 12:05:59 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 09 12:05:59 compute-0 radosgw[89472]: ====== starting new request req=0x7fb91647e5d0 =====
Dec 09 12:05:59 compute-0 radosgw[89472]: ====== req done req=0x7fb91647e5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 09 12:05:59 compute-0 radosgw[89472]: beast: 0x7fb91647e5d0: 192.168.122.100 - anonymous [09/Dec/2025:12:05:59.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 09 12:05:59 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Dec 09 12:05:59 compute-0 ceph-mon[74388]: 7.12 scrub starts
Dec 09 12:05:59 compute-0 ceph-mon[74388]: 7.12 scrub ok
Dec 09 12:05:59 compute-0 ceph-mon[74388]: 9.7 deep-scrub starts
Dec 09 12:05:59 compute-0 ceph-mon[74388]: 9.7 deep-scrub ok
Dec 09 12:05:59 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 09 12:05:59 compute-0 ceph-mon[74388]: from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 09 12:05:59 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 55 pg[11.0( v 48'2 (0'0,48'2] local-lis/les=39/40 n=2 ec=39/39 lis/c=39/39 les/c/f=40/40/0 sis=55 pruub=12.306273460s) [1] r=0 lpr=55 pi=[39,55)/1 crt=48'2 lcod 48'1 mlcod 48'1 active pruub 187.269851685s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 09 12:05:59 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 55 pg[11.0( v 48'2 lc 0'0 (0'0,48'2] local-lis/les=39/40 n=0 ec=39/39 lis/c=39/39 les/c/f=40/40/0 sis=55 pruub=12.306273460s) [1] r=0 lpr=55 pi=[39,55)/1 crt=48'2 lcod 48'1 mlcod 0'0 unknown pruub 187.269851685s@ mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:59 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 9.14 deep-scrub ok
Dec 09 12:05:59 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Dec 09 12:05:59 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec 09 12:05:59 compute-0 ceph-mon[74388]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Dec 09 12:05:59 compute-0 ceph-mon[74388]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Dec 09 12:05:59 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 56 pg[11.14( v 48'2 lc 0'0 (0'0,48'2] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=48'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:59 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 56 pg[11.5( v 48'2 lc 0'0 (0'0,48'2] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=48'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:59 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 56 pg[11.17( v 48'2 lc 0'0 (0'0,48'2] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=48'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:59 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 56 pg[11.16( v 48'2 lc 0'0 (0'0,48'2] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=48'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:59 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 56 pg[11.15( v 48'2 lc 0'0 (0'0,48'2] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=48'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:59 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 56 pg[11.13( v 48'2 lc 0'0 (0'0,48'2] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=48'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:59 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 56 pg[11.12( v 48'2 lc 0'0 (0'0,48'2] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=48'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:59 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 56 pg[11.1( v 48'2 (0'0,48'2] local-lis/les=39/40 n=1 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=48'2 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:59 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 56 pg[11.c( v 48'2 lc 0'0 (0'0,48'2] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=48'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:59 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 56 pg[11.b( v 48'2 lc 0'0 (0'0,48'2] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=48'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:59 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 56 pg[11.d( v 48'2 lc 0'0 (0'0,48'2] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=48'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:59 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 56 pg[11.e( v 48'2 lc 0'0 (0'0,48'2] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=48'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:59 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 56 pg[11.a( v 48'2 lc 0'0 (0'0,48'2] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=48'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:59 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 56 pg[11.9( v 48'2 lc 0'0 (0'0,48'2] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=48'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:59 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 56 pg[11.f( v 48'2 lc 0'0 (0'0,48'2] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=48'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:59 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 56 pg[11.8( v 48'2 lc 0'0 (0'0,48'2] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=48'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:59 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 56 pg[11.2( v 48'2 lc 0'0 (0'0,48'2] local-lis/les=39/40 n=1 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=48'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:59 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 56 pg[11.3( v 48'2 lc 0'0 (0'0,48'2] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=48'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:59 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 56 pg[11.6( v 48'2 lc 0'0 (0'0,48'2] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=48'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:59 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 56 pg[11.4( v 48'2 lc 0'0 (0'0,48'2] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=48'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:59 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 56 pg[11.7( v 48'2 lc 0'0 (0'0,48'2] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=48'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:59 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 56 pg[11.18( v 48'2 lc 0'0 (0'0,48'2] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=48'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:59 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 56 pg[11.19( v 48'2 lc 0'0 (0'0,48'2] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=48'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:59 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 56 pg[11.1a( v 48'2 lc 0'0 (0'0,48'2] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=48'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:59 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 56 pg[11.1b( v 48'2 lc 0'0 (0'0,48'2] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=48'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:59 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 56 pg[11.1c( v 48'2 lc 0'0 (0'0,48'2] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=48'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:59 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 56 pg[11.1d( v 48'2 lc 0'0 (0'0,48'2] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=48'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:59 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 56 pg[11.1e( v 48'2 lc 0'0 (0'0,48'2] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=48'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:59 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 56 pg[11.1f( v 48'2 lc 0'0 (0'0,48'2] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=48'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:59 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 56 pg[11.10( v 48'2 lc 0'0 (0'0,48'2] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=48'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:05:59 compute-0 ceph-osd[82922]: osd.1 pg_epoch: 56 pg[11.11( v 48'2 lc 0'0 (0'0,48'2] local-lis/les=39/40 n=0 ec=55/39 lis/c=39/39 les/c/f=40/40/0 sis=55) [1] r=0 lpr=55 pi=[39,55)/1 crt=48'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 09 12:06:00 compute-0 podman[98472]: 2025-12-09 12:06:00.064178584 +0000 UTC m=+8.025228543 container create 3c20808cc43856876a7d8ae6eba15e5e47f231ead1ef5bce99e749a5908e7fdd (image=quay.io/ceph/grafana:10.4.0, name=agitated_shaw, maintainer=Grafana Labs <hello@grafana.com>)
Dec 09 12:06:00 compute-0 podman[98472]: 2025-12-09 12:06:00.039698057 +0000 UTC m=+8.000748046 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Dec 09 12:06:00 compute-0 systemd[1]: Started libpod-conmon-3c20808cc43856876a7d8ae6eba15e5e47f231ead1ef5bce99e749a5908e7fdd.scope.
Dec 09 12:06:00 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:06:00 compute-0 podman[98472]: 2025-12-09 12:06:00.161936293 +0000 UTC m=+8.122986282 container init 3c20808cc43856876a7d8ae6eba15e5e47f231ead1ef5bce99e749a5908e7fdd (image=quay.io/ceph/grafana:10.4.0, name=agitated_shaw, maintainer=Grafana Labs <hello@grafana.com>)
Dec 09 12:06:00 compute-0 podman[98472]: 2025-12-09 12:06:00.174207396 +0000 UTC m=+8.135257355 container start 3c20808cc43856876a7d8ae6eba15e5e47f231ead1ef5bce99e749a5908e7fdd (image=quay.io/ceph/grafana:10.4.0, name=agitated_shaw, maintainer=Grafana Labs <hello@grafana.com>)
Dec 09 12:06:00 compute-0 podman[98472]: 2025-12-09 12:06:00.178705788 +0000 UTC m=+8.139755747 container attach 3c20808cc43856876a7d8ae6eba15e5e47f231ead1ef5bce99e749a5908e7fdd (image=quay.io/ceph/grafana:10.4.0, name=agitated_shaw, maintainer=Grafana Labs <hello@grafana.com>)
Dec 09 12:06:00 compute-0 agitated_shaw[99419]: 472 0
Dec 09 12:06:00 compute-0 systemd[1]: libpod-3c20808cc43856876a7d8ae6eba15e5e47f231ead1ef5bce99e749a5908e7fdd.scope: Deactivated successfully.
Dec 09 12:06:00 compute-0 podman[98472]: 2025-12-09 12:06:00.17975832 +0000 UTC m=+8.140808289 container died 3c20808cc43856876a7d8ae6eba15e5e47f231ead1ef5bce99e749a5908e7fdd (image=quay.io/ceph/grafana:10.4.0, name=agitated_shaw, maintainer=Grafana Labs <hello@grafana.com>)
Dec 09 12:06:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b59a0b1aafd9eb9255cb2df1e5090a6dc2e784e4ee75d291209361308596b6a-merged.mount: Deactivated successfully.
Dec 09 12:06:00 compute-0 podman[98472]: 2025-12-09 12:06:00.230488578 +0000 UTC m=+8.191538537 container remove 3c20808cc43856876a7d8ae6eba15e5e47f231ead1ef5bce99e749a5908e7fdd (image=quay.io/ceph/grafana:10.4.0, name=agitated_shaw, maintainer=Grafana Labs <hello@grafana.com>)
Dec 09 12:06:00 compute-0 systemd[1]: libpod-conmon-3c20808cc43856876a7d8ae6eba15e5e47f231ead1ef5bce99e749a5908e7fdd.scope: Deactivated successfully.
Dec 09 12:06:00 compute-0 podman[99496]: 2025-12-09 12:06:00.313832537 +0000 UTC m=+0.053076373 container create 7b17242cc13ff85b59303567aaa05292a5c8ec966fcd6d3ae3afb533e4706ee9 (image=quay.io/ceph/grafana:10.4.0, name=xenodochial_clarke, maintainer=Grafana Labs <hello@grafana.com>)
Dec 09 12:06:00 compute-0 systemd[1]: Started libpod-conmon-7b17242cc13ff85b59303567aaa05292a5c8ec966fcd6d3ae3afb533e4706ee9.scope.
Dec 09 12:06:00 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Dec 09 12:06:00 compute-0 ceph-osd[82922]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Dec 09 12:06:00 compute-0 systemd[1]: Started libcrun container.
Dec 09 12:06:00 compute-0 podman[99496]: 2025-12-09 12:06:00.284421216 +0000 UTC m=+0.023665072 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Dec 09 12:06:00 compute-0 podman[99496]: 2025-12-09 12:06:00.381251356 +0000 UTC m=+0.120495192 container init 7b17242cc13ff85b59303567aaa05292a5c8ec966fcd6d3ae3afb533e4706ee9 (image=quay.io/ceph/grafana:10.4.0, name=xenodochial_clarke, maintainer=Grafana Labs <hello@grafana.com>)
Dec 09 12:06:00 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:06:00 : epoch 69381080 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5f40013a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 09 12:06:00 compute-0 podman[99496]: 2025-12-09 12:06:00.39001382 +0000 UTC m=+0.129257656 container start 7b17242cc13ff85b59303567aaa05292a5c8ec966fcd6d3ae3afb533e4706ee9 (image=quay.io/ceph/grafana:10.4.0, name=xenodochial_clarke, maintainer=Grafana Labs <hello@grafana.com>)
Dec 09 12:06:00 compute-0 xenodochial_clarke[99521]: 472 0
Dec 09 12:06:00 compute-0 systemd[1]: libpod-7b17242cc13ff85b59303567aaa05292a5c8ec966fcd6d3ae3afb533e4706ee9.scope: Deactivated successfully.
Dec 09 12:06:00 compute-0 podman[99496]: 2025-12-09 12:06:00.393456418 +0000 UTC m=+0.132700254 container attach 7b17242cc13ff85b59303567aaa05292a5c8ec966fcd6d3ae3afb533e4706ee9 (image=quay.io/ceph/grafana:10.4.0, name=xenodochial_clarke, maintainer=Grafana Labs <hello@grafana.com>)
Dec 09 12:06:00 compute-0 podman[99496]: 2025-12-09 12:06:00.394365017 +0000 UTC m=+0.133608843 container died 7b17242cc13ff85b59303567aaa05292a5c8ec966fcd6d3ae3afb533e4706ee9 (image=quay.io/ceph/grafana:10.4.0, name=xenodochial_clarke, maintainer=Grafana Labs <hello@grafana.com>)
Dec 09 12:06:00 compute-0 ceph-mgr[74679]: [progress INFO root] Writing back 23 completed events
Dec 09 12:06:00 compute-0 ceph-mon[74388]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 09 12:06:00 compute-0 ceph-mon[74388]: log_channel(audit) log [INF] : from='mgr.14466 192.168.122.100:0/3350565451' entity='mgr.compute-0.wfxreg' 
Dec 09 12:06:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-43cbc2b27eb0447f12ceb11615a6b8e44a2adfdb6a4eb0cba7951d02562767bb-merged.mount: Deactivated successfully.
Dec 09 12:06:00 compute-0 podman[99496]: 2025-12-09 12:06:00.439172219 +0000 UTC m=+0.178416045 container remove 7b17242cc13ff85b59303567aaa05292a5c8ec966fcd6d3ae3afb533e4706ee9 (image=quay.io/ceph/grafana:10.4.0, name=xenodochial_clarke, maintainer=Grafana Labs <hello@grafana.com>)
Dec 09 12:06:00 compute-0 systemd[1]: libpod-conmon-7b17242cc13ff85b59303567aaa05292a5c8ec966fcd6d3ae3afb533e4706ee9.scope: Deactivated successfully.
Dec 09 12:06:00 compute-0 systemd[1]: Reloading.
Dec 09 12:06:00 compute-0 ceph-mgr[74679]: log_channel(cluster) log [DBG] : pgmap v51: 353 pgs: 1 peering, 1 active+clean+scrubbing+deep, 62 unknown, 289 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec 09 12:06:00 compute-0 ceph-750b57e3-924f-51a5-ab09-01517535f732-nfs-cephfs-2-0-compute-0-mbjryf[96654]: 09/12/2025 12:06:00 : epoch 69381080 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5f8001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 09 12:06:00 compute-0 systemd-rc-local-generator[99577]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 12:06:00 compute-0 systemd-sysv-generator[99582]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.