Dec 08 09:08:34 localhost kernel: Linux version 5.14.0-645.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-68.el9) #1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025
Dec 08 09:08:34 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Dec 08 09:08:34 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64 root=UUID=fcf6b761-831a-48a7-9f5f-068b5063763f ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec 08 09:08:34 localhost kernel: BIOS-provided physical RAM map:
Dec 08 09:08:34 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 08 09:08:34 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 08 09:08:34 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 08 09:08:34 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Dec 08 09:08:34 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Dec 08 09:08:34 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 08 09:08:34 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 08 09:08:34 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Dec 08 09:08:34 localhost kernel: NX (Execute Disable) protection: active
Dec 08 09:08:34 localhost kernel: APIC: Static calls initialized
Dec 08 09:08:34 localhost kernel: SMBIOS 2.8 present.
Dec 08 09:08:34 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Dec 08 09:08:34 localhost kernel: Hypervisor detected: KVM
Dec 08 09:08:34 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 08 09:08:34 localhost kernel: kvm-clock: using sched offset of 4296279001 cycles
Dec 08 09:08:34 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 08 09:08:34 localhost kernel: tsc: Detected 2800.000 MHz processor
Dec 08 09:08:34 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 08 09:08:34 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 08 09:08:34 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Dec 08 09:08:34 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 08 09:08:34 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Dec 08 09:08:34 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Dec 08 09:08:34 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Dec 08 09:08:34 localhost kernel: Using GB pages for direct mapping
Dec 08 09:08:34 localhost kernel: RAMDISK: [mem 0x2d472000-0x32a30fff]
Dec 08 09:08:34 localhost kernel: ACPI: Early table checksum verification disabled
Dec 08 09:08:34 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Dec 08 09:08:34 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 08 09:08:34 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 08 09:08:34 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 08 09:08:34 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Dec 08 09:08:34 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 08 09:08:34 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 08 09:08:34 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Dec 08 09:08:34 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Dec 08 09:08:34 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Dec 08 09:08:34 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Dec 08 09:08:34 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Dec 08 09:08:34 localhost kernel: No NUMA configuration found
Dec 08 09:08:34 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Dec 08 09:08:34 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Dec 08 09:08:34 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Dec 08 09:08:34 localhost kernel: Zone ranges:
Dec 08 09:08:34 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Dec 08 09:08:34 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Dec 08 09:08:34 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Dec 08 09:08:34 localhost kernel:   Device   empty
Dec 08 09:08:34 localhost kernel: Movable zone start for each node
Dec 08 09:08:34 localhost kernel: Early memory node ranges
Dec 08 09:08:34 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Dec 08 09:08:34 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Dec 08 09:08:34 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Dec 08 09:08:34 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Dec 08 09:08:34 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 08 09:08:34 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 08 09:08:34 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Dec 08 09:08:34 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Dec 08 09:08:34 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 08 09:08:34 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 08 09:08:34 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 08 09:08:34 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 08 09:08:34 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 08 09:08:34 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 08 09:08:34 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 08 09:08:34 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 08 09:08:34 localhost kernel: TSC deadline timer available
Dec 08 09:08:34 localhost kernel: CPU topo: Max. logical packages:   8
Dec 08 09:08:34 localhost kernel: CPU topo: Max. logical dies:       8
Dec 08 09:08:34 localhost kernel: CPU topo: Max. dies per package:   1
Dec 08 09:08:34 localhost kernel: CPU topo: Max. threads per core:   1
Dec 08 09:08:34 localhost kernel: CPU topo: Num. cores per package:     1
Dec 08 09:08:34 localhost kernel: CPU topo: Num. threads per package:   1
Dec 08 09:08:34 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Dec 08 09:08:34 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 08 09:08:34 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Dec 08 09:08:34 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Dec 08 09:08:34 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Dec 08 09:08:34 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Dec 08 09:08:34 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Dec 08 09:08:34 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Dec 08 09:08:34 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Dec 08 09:08:34 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Dec 08 09:08:34 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Dec 08 09:08:34 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Dec 08 09:08:34 localhost kernel: Booting paravirtualized kernel on KVM
Dec 08 09:08:34 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 08 09:08:34 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Dec 08 09:08:34 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Dec 08 09:08:34 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Dec 08 09:08:34 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Dec 08 09:08:34 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Dec 08 09:08:34 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64 root=UUID=fcf6b761-831a-48a7-9f5f-068b5063763f ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec 08 09:08:34 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64", will be passed to user space.
Dec 08 09:08:34 localhost kernel: random: crng init done
Dec 08 09:08:34 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec 08 09:08:34 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 08 09:08:34 localhost kernel: Fallback order for Node 0: 0 
Dec 08 09:08:34 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Dec 08 09:08:34 localhost kernel: Policy zone: Normal
Dec 08 09:08:34 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 08 09:08:34 localhost kernel: software IO TLB: area num 8.
Dec 08 09:08:34 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Dec 08 09:08:34 localhost kernel: ftrace: allocating 49335 entries in 193 pages
Dec 08 09:08:34 localhost kernel: ftrace: allocated 193 pages with 3 groups
Dec 08 09:08:34 localhost kernel: Dynamic Preempt: voluntary
Dec 08 09:08:34 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 08 09:08:34 localhost kernel: rcu:         RCU event tracing is enabled.
Dec 08 09:08:34 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Dec 08 09:08:34 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Dec 08 09:08:34 localhost kernel:         Rude variant of Tasks RCU enabled.
Dec 08 09:08:34 localhost kernel:         Tracing variant of Tasks RCU enabled.
Dec 08 09:08:34 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 08 09:08:34 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Dec 08 09:08:34 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec 08 09:08:34 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec 08 09:08:34 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec 08 09:08:34 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Dec 08 09:08:34 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 08 09:08:34 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Dec 08 09:08:34 localhost kernel: Console: colour VGA+ 80x25
Dec 08 09:08:34 localhost kernel: printk: console [ttyS0] enabled
Dec 08 09:08:34 localhost kernel: ACPI: Core revision 20230331
Dec 08 09:08:34 localhost kernel: APIC: Switch to symmetric I/O mode setup
Dec 08 09:08:34 localhost kernel: x2apic enabled
Dec 08 09:08:34 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Dec 08 09:08:34 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 08 09:08:34 localhost kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Dec 08 09:08:34 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 08 09:08:34 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 08 09:08:34 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 08 09:08:34 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 08 09:08:34 localhost kernel: Spectre V2 : Mitigation: Retpolines
Dec 08 09:08:34 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec 08 09:08:34 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 08 09:08:34 localhost kernel: RETBleed: Mitigation: untrained return thunk
Dec 08 09:08:34 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 08 09:08:34 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 08 09:08:34 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 08 09:08:34 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 08 09:08:34 localhost kernel: x86/bugs: return thunk changed
Dec 08 09:08:34 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec 08 09:08:34 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 08 09:08:34 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 08 09:08:34 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 08 09:08:34 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Dec 08 09:08:34 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec 08 09:08:34 localhost kernel: Freeing SMP alternatives memory: 40K
Dec 08 09:08:34 localhost kernel: pid_max: default: 32768 minimum: 301
Dec 08 09:08:34 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Dec 08 09:08:34 localhost kernel: landlock: Up and running.
Dec 08 09:08:34 localhost kernel: Yama: becoming mindful.
Dec 08 09:08:34 localhost kernel: SELinux:  Initializing.
Dec 08 09:08:34 localhost kernel: LSM support for eBPF active
Dec 08 09:08:34 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 08 09:08:34 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 08 09:08:34 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 08 09:08:34 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 08 09:08:34 localhost kernel: ... version:                0
Dec 08 09:08:34 localhost kernel: ... bit width:              48
Dec 08 09:08:34 localhost kernel: ... generic registers:      6
Dec 08 09:08:34 localhost kernel: ... value mask:             0000ffffffffffff
Dec 08 09:08:34 localhost kernel: ... max period:             00007fffffffffff
Dec 08 09:08:34 localhost kernel: ... fixed-purpose events:   0
Dec 08 09:08:34 localhost kernel: ... event mask:             000000000000003f
Dec 08 09:08:34 localhost kernel: signal: max sigframe size: 1776
Dec 08 09:08:34 localhost kernel: rcu: Hierarchical SRCU implementation.
Dec 08 09:08:34 localhost kernel: rcu:         Max phase no-delay instances is 400.
Dec 08 09:08:34 localhost kernel: smp: Bringing up secondary CPUs ...
Dec 08 09:08:34 localhost kernel: smpboot: x86: Booting SMP configuration:
Dec 08 09:08:34 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Dec 08 09:08:34 localhost kernel: smp: Brought up 1 node, 8 CPUs
Dec 08 09:08:34 localhost kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Dec 08 09:08:34 localhost kernel: node 0 deferred pages initialised in 37ms
Dec 08 09:08:34 localhost kernel: Memory: 7763552K/8388068K available (16384K kernel code, 5795K rwdata, 13908K rodata, 4196K init, 7156K bss, 618204K reserved, 0K cma-reserved)
Dec 08 09:08:34 localhost kernel: devtmpfs: initialized
Dec 08 09:08:34 localhost kernel: x86/mm: Memory block size: 128MB
Dec 08 09:08:34 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 08 09:08:34 localhost kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Dec 08 09:08:34 localhost kernel: pinctrl core: initialized pinctrl subsystem
Dec 08 09:08:34 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 08 09:08:34 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Dec 08 09:08:34 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 08 09:08:34 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 08 09:08:34 localhost kernel: audit: initializing netlink subsys (disabled)
Dec 08 09:08:34 localhost kernel: audit: type=2000 audit(1765184910.995:1): state=initialized audit_enabled=0 res=1
Dec 08 09:08:34 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Dec 08 09:08:34 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 08 09:08:34 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 08 09:08:34 localhost kernel: cpuidle: using governor menu
Dec 08 09:08:34 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 08 09:08:34 localhost kernel: PCI: Using configuration type 1 for base access
Dec 08 09:08:34 localhost kernel: PCI: Using configuration type 1 for extended access
Dec 08 09:08:34 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 08 09:08:34 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 08 09:08:34 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 08 09:08:34 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 08 09:08:34 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 08 09:08:34 localhost kernel: Demotion targets for Node 0: null
Dec 08 09:08:34 localhost kernel: cryptd: max_cpu_qlen set to 1000
Dec 08 09:08:34 localhost kernel: ACPI: Added _OSI(Module Device)
Dec 08 09:08:34 localhost kernel: ACPI: Added _OSI(Processor Device)
Dec 08 09:08:34 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 08 09:08:34 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 08 09:08:34 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 08 09:08:34 localhost kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 08 09:08:34 localhost kernel: ACPI: Interpreter enabled
Dec 08 09:08:34 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Dec 08 09:08:34 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Dec 08 09:08:34 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 08 09:08:34 localhost kernel: PCI: Using E820 reservations for host bridge windows
Dec 08 09:08:34 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Dec 08 09:08:34 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 08 09:08:34 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Dec 08 09:08:34 localhost kernel: acpiphp: Slot [3] registered
Dec 08 09:08:34 localhost kernel: acpiphp: Slot [4] registered
Dec 08 09:08:34 localhost kernel: acpiphp: Slot [5] registered
Dec 08 09:08:34 localhost kernel: acpiphp: Slot [6] registered
Dec 08 09:08:34 localhost kernel: acpiphp: Slot [7] registered
Dec 08 09:08:34 localhost kernel: acpiphp: Slot [8] registered
Dec 08 09:08:34 localhost kernel: acpiphp: Slot [9] registered
Dec 08 09:08:34 localhost kernel: acpiphp: Slot [10] registered
Dec 08 09:08:34 localhost kernel: acpiphp: Slot [11] registered
Dec 08 09:08:34 localhost kernel: acpiphp: Slot [12] registered
Dec 08 09:08:34 localhost kernel: acpiphp: Slot [13] registered
Dec 08 09:08:34 localhost kernel: acpiphp: Slot [14] registered
Dec 08 09:08:34 localhost kernel: acpiphp: Slot [15] registered
Dec 08 09:08:34 localhost kernel: acpiphp: Slot [16] registered
Dec 08 09:08:34 localhost kernel: acpiphp: Slot [17] registered
Dec 08 09:08:34 localhost kernel: acpiphp: Slot [18] registered
Dec 08 09:08:34 localhost kernel: acpiphp: Slot [19] registered
Dec 08 09:08:34 localhost kernel: acpiphp: Slot [20] registered
Dec 08 09:08:34 localhost kernel: acpiphp: Slot [21] registered
Dec 08 09:08:34 localhost kernel: acpiphp: Slot [22] registered
Dec 08 09:08:34 localhost kernel: acpiphp: Slot [23] registered
Dec 08 09:08:34 localhost kernel: acpiphp: Slot [24] registered
Dec 08 09:08:34 localhost kernel: acpiphp: Slot [25] registered
Dec 08 09:08:34 localhost kernel: acpiphp: Slot [26] registered
Dec 08 09:08:34 localhost kernel: acpiphp: Slot [27] registered
Dec 08 09:08:34 localhost kernel: acpiphp: Slot [28] registered
Dec 08 09:08:34 localhost kernel: acpiphp: Slot [29] registered
Dec 08 09:08:34 localhost kernel: acpiphp: Slot [30] registered
Dec 08 09:08:34 localhost kernel: acpiphp: Slot [31] registered
Dec 08 09:08:34 localhost kernel: PCI host bridge to bus 0000:00
Dec 08 09:08:34 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Dec 08 09:08:34 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Dec 08 09:08:34 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 08 09:08:34 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 08 09:08:34 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Dec 08 09:08:34 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 08 09:08:34 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Dec 08 09:08:34 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Dec 08 09:08:34 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Dec 08 09:08:34 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Dec 08 09:08:34 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Dec 08 09:08:34 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Dec 08 09:08:34 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Dec 08 09:08:34 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Dec 08 09:08:34 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Dec 08 09:08:34 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Dec 08 09:08:34 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Dec 08 09:08:34 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Dec 08 09:08:34 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Dec 08 09:08:34 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Dec 08 09:08:34 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Dec 08 09:08:34 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Dec 08 09:08:34 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Dec 08 09:08:34 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Dec 08 09:08:34 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 08 09:08:34 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec 08 09:08:34 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Dec 08 09:08:34 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Dec 08 09:08:34 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Dec 08 09:08:34 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Dec 08 09:08:34 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Dec 08 09:08:34 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Dec 08 09:08:34 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Dec 08 09:08:34 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Dec 08 09:08:34 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Dec 08 09:08:34 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Dec 08 09:08:34 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Dec 08 09:08:34 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Dec 08 09:08:34 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Dec 08 09:08:34 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Dec 08 09:08:34 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 08 09:08:34 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 08 09:08:34 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 08 09:08:34 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 08 09:08:34 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 08 09:08:34 localhost kernel: iommu: Default domain type: Translated
Dec 08 09:08:34 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 08 09:08:34 localhost kernel: SCSI subsystem initialized
Dec 08 09:08:34 localhost kernel: ACPI: bus type USB registered
Dec 08 09:08:34 localhost kernel: usbcore: registered new interface driver usbfs
Dec 08 09:08:34 localhost kernel: usbcore: registered new interface driver hub
Dec 08 09:08:34 localhost kernel: usbcore: registered new device driver usb
Dec 08 09:08:34 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 08 09:08:34 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Dec 08 09:08:34 localhost kernel: PTP clock support registered
Dec 08 09:08:34 localhost kernel: EDAC MC: Ver: 3.0.0
Dec 08 09:08:34 localhost kernel: NetLabel: Initializing
Dec 08 09:08:34 localhost kernel: NetLabel:  domain hash size = 128
Dec 08 09:08:34 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Dec 08 09:08:34 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Dec 08 09:08:34 localhost kernel: PCI: Using ACPI for IRQ routing
Dec 08 09:08:34 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 08 09:08:34 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 08 09:08:34 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Dec 08 09:08:34 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Dec 08 09:08:34 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Dec 08 09:08:34 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 08 09:08:34 localhost kernel: vgaarb: loaded
Dec 08 09:08:34 localhost kernel: clocksource: Switched to clocksource kvm-clock
Dec 08 09:08:34 localhost kernel: VFS: Disk quotas dquot_6.6.0
Dec 08 09:08:34 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 08 09:08:34 localhost kernel: pnp: PnP ACPI init
Dec 08 09:08:34 localhost kernel: pnp 00:03: [dma 2]
Dec 08 09:08:34 localhost kernel: pnp: PnP ACPI: found 5 devices
Dec 08 09:08:34 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 08 09:08:34 localhost kernel: NET: Registered PF_INET protocol family
Dec 08 09:08:34 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 08 09:08:34 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Dec 08 09:08:34 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 08 09:08:34 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 08 09:08:34 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Dec 08 09:08:34 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Dec 08 09:08:34 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Dec 08 09:08:34 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 08 09:08:34 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 08 09:08:34 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 08 09:08:34 localhost kernel: NET: Registered PF_XDP protocol family
Dec 08 09:08:34 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Dec 08 09:08:34 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Dec 08 09:08:34 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 08 09:08:34 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Dec 08 09:08:34 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Dec 08 09:08:34 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Dec 08 09:08:34 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 08 09:08:34 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec 08 09:08:34 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 75243 usecs
Dec 08 09:08:34 localhost kernel: PCI: CLS 0 bytes, default 64
Dec 08 09:08:34 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 08 09:08:34 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Dec 08 09:08:34 localhost kernel: ACPI: bus type thunderbolt registered
Dec 08 09:08:34 localhost kernel: Trying to unpack rootfs image as initramfs...
Dec 08 09:08:34 localhost kernel: Initialise system trusted keyrings
Dec 08 09:08:34 localhost kernel: Key type blacklist registered
Dec 08 09:08:34 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Dec 08 09:08:34 localhost kernel: zbud: loaded
Dec 08 09:08:34 localhost kernel: integrity: Platform Keyring initialized
Dec 08 09:08:34 localhost kernel: integrity: Machine keyring initialized
Dec 08 09:08:34 localhost kernel: Freeing initrd memory: 87804K
Dec 08 09:08:34 localhost kernel: NET: Registered PF_ALG protocol family
Dec 08 09:08:34 localhost kernel: xor: automatically using best checksumming function   avx       
Dec 08 09:08:34 localhost kernel: Key type asymmetric registered
Dec 08 09:08:34 localhost kernel: Asymmetric key parser 'x509' registered
Dec 08 09:08:34 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Dec 08 09:08:34 localhost kernel: io scheduler mq-deadline registered
Dec 08 09:08:34 localhost kernel: io scheduler kyber registered
Dec 08 09:08:34 localhost kernel: io scheduler bfq registered
Dec 08 09:08:34 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Dec 08 09:08:34 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Dec 08 09:08:34 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Dec 08 09:08:34 localhost kernel: ACPI: button: Power Button [PWRF]
Dec 08 09:08:34 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Dec 08 09:08:34 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec 08 09:08:34 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec 08 09:08:34 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 08 09:08:34 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 08 09:08:34 localhost kernel: Non-volatile memory driver v1.3
Dec 08 09:08:34 localhost kernel: rdac: device handler registered
Dec 08 09:08:34 localhost kernel: hp_sw: device handler registered
Dec 08 09:08:34 localhost kernel: emc: device handler registered
Dec 08 09:08:34 localhost kernel: alua: device handler registered
Dec 08 09:08:34 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Dec 08 09:08:34 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Dec 08 09:08:34 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Dec 08 09:08:34 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Dec 08 09:08:34 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Dec 08 09:08:34 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Dec 08 09:08:34 localhost kernel: usb usb1: Product: UHCI Host Controller
Dec 08 09:08:34 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-645.el9.x86_64 uhci_hcd
Dec 08 09:08:34 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Dec 08 09:08:34 localhost kernel: hub 1-0:1.0: USB hub found
Dec 08 09:08:34 localhost kernel: hub 1-0:1.0: 2 ports detected
Dec 08 09:08:34 localhost kernel: usbcore: registered new interface driver usbserial_generic
Dec 08 09:08:34 localhost kernel: usbserial: USB Serial support registered for generic
Dec 08 09:08:34 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 08 09:08:34 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 08 09:08:34 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 08 09:08:34 localhost kernel: mousedev: PS/2 mouse device common for all mice
Dec 08 09:08:34 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 08 09:08:34 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Dec 08 09:08:34 localhost kernel: rtc_cmos 00:04: registered as rtc0
Dec 08 09:08:34 localhost kernel: rtc_cmos 00:04: setting system clock to 2025-12-08T09:08:33 UTC (1765184913)
Dec 08 09:08:34 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Dec 08 09:08:34 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec 08 09:08:34 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Dec 08 09:08:34 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Dec 08 09:08:34 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 08 09:08:34 localhost kernel: usbcore: registered new interface driver usbhid
Dec 08 09:08:34 localhost kernel: usbhid: USB HID core driver
Dec 08 09:08:34 localhost kernel: drop_monitor: Initializing network drop monitor service
Dec 08 09:08:34 localhost kernel: Initializing XFRM netlink socket
Dec 08 09:08:34 localhost kernel: NET: Registered PF_INET6 protocol family
Dec 08 09:08:34 localhost kernel: Segment Routing with IPv6
Dec 08 09:08:34 localhost kernel: NET: Registered PF_PACKET protocol family
Dec 08 09:08:34 localhost kernel: mpls_gso: MPLS GSO support
Dec 08 09:08:34 localhost kernel: IPI shorthand broadcast: enabled
Dec 08 09:08:34 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Dec 08 09:08:34 localhost kernel: AES CTR mode by8 optimization enabled
Dec 08 09:08:34 localhost kernel: sched_clock: Marking stable (3073002929, 157195430)->(3471142419, -240944060)
Dec 08 09:08:34 localhost kernel: registered taskstats version 1
Dec 08 09:08:34 localhost kernel: Loading compiled-in X.509 certificates
Dec 08 09:08:34 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4c28336b4850d771d036b52fb2778fdb4f02f708'
Dec 08 09:08:34 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Dec 08 09:08:34 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Dec 08 09:08:34 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Dec 08 09:08:34 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Dec 08 09:08:34 localhost kernel: Demotion targets for Node 0: null
Dec 08 09:08:34 localhost kernel: page_owner is disabled
Dec 08 09:08:34 localhost kernel: Key type .fscrypt registered
Dec 08 09:08:34 localhost kernel: Key type fscrypt-provisioning registered
Dec 08 09:08:34 localhost kernel: Key type big_key registered
Dec 08 09:08:34 localhost kernel: Key type encrypted registered
Dec 08 09:08:34 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 08 09:08:34 localhost kernel: Loading compiled-in module X.509 certificates
Dec 08 09:08:34 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4c28336b4850d771d036b52fb2778fdb4f02f708'
Dec 08 09:08:34 localhost kernel: ima: Allocated hash algorithm: sha256
Dec 08 09:08:34 localhost kernel: ima: No architecture policies found
Dec 08 09:08:34 localhost kernel: evm: Initialising EVM extended attributes:
Dec 08 09:08:34 localhost kernel: evm: security.selinux
Dec 08 09:08:34 localhost kernel: evm: security.SMACK64 (disabled)
Dec 08 09:08:34 localhost kernel: evm: security.SMACK64EXEC (disabled)
Dec 08 09:08:34 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Dec 08 09:08:34 localhost kernel: evm: security.SMACK64MMAP (disabled)
Dec 08 09:08:34 localhost kernel: evm: security.apparmor (disabled)
Dec 08 09:08:34 localhost kernel: evm: security.ima
Dec 08 09:08:34 localhost kernel: evm: security.capability
Dec 08 09:08:34 localhost kernel: evm: HMAC attrs: 0x1
Dec 08 09:08:34 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Dec 08 09:08:34 localhost kernel: Running certificate verification RSA selftest
Dec 08 09:08:34 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Dec 08 09:08:34 localhost kernel: Running certificate verification ECDSA selftest
Dec 08 09:08:34 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Dec 08 09:08:34 localhost kernel: clk: Disabling unused clocks
Dec 08 09:08:34 localhost kernel: Freeing unused decrypted memory: 2028K
Dec 08 09:08:34 localhost kernel: Freeing unused kernel image (initmem) memory: 4196K
Dec 08 09:08:34 localhost kernel: Write protecting the kernel read-only data: 30720k
Dec 08 09:08:34 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 428K
Dec 08 09:08:34 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Dec 08 09:08:34 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Dec 08 09:08:34 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Dec 08 09:08:34 localhost kernel: usb 1-1: Manufacturer: QEMU
Dec 08 09:08:34 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Dec 08 09:08:34 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Dec 08 09:08:34 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Dec 08 09:08:34 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Dec 08 09:08:34 localhost kernel: Run /init as init process
Dec 08 09:08:34 localhost kernel:   with arguments:
Dec 08 09:08:34 localhost kernel:     /init
Dec 08 09:08:34 localhost kernel:   with environment:
Dec 08 09:08:34 localhost kernel:     HOME=/
Dec 08 09:08:34 localhost kernel:     TERM=linux
Dec 08 09:08:34 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64
Dec 08 09:08:34 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 08 09:08:34 localhost systemd[1]: Detected virtualization kvm.
Dec 08 09:08:34 localhost systemd[1]: Detected architecture x86-64.
Dec 08 09:08:34 localhost systemd[1]: Running in initrd.
Dec 08 09:08:34 localhost systemd[1]: No hostname configured, using default hostname.
Dec 08 09:08:34 localhost systemd[1]: Hostname set to <localhost>.
Dec 08 09:08:34 localhost systemd[1]: Initializing machine ID from VM UUID.
Dec 08 09:08:34 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Dec 08 09:08:34 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Dec 08 09:08:34 localhost systemd[1]: Reached target Local Encrypted Volumes.
Dec 08 09:08:34 localhost systemd[1]: Reached target Initrd /usr File System.
Dec 08 09:08:34 localhost systemd[1]: Reached target Local File Systems.
Dec 08 09:08:34 localhost systemd[1]: Reached target Path Units.
Dec 08 09:08:34 localhost systemd[1]: Reached target Slice Units.
Dec 08 09:08:34 localhost systemd[1]: Reached target Swaps.
Dec 08 09:08:34 localhost systemd[1]: Reached target Timer Units.
Dec 08 09:08:34 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Dec 08 09:08:34 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Dec 08 09:08:34 localhost systemd[1]: Listening on Journal Socket.
Dec 08 09:08:34 localhost systemd[1]: Listening on udev Control Socket.
Dec 08 09:08:34 localhost systemd[1]: Listening on udev Kernel Socket.
Dec 08 09:08:34 localhost systemd[1]: Reached target Socket Units.
Dec 08 09:08:34 localhost systemd[1]: Starting Create List of Static Device Nodes...
Dec 08 09:08:34 localhost systemd[1]: Starting Journal Service...
Dec 08 09:08:34 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec 08 09:08:34 localhost systemd[1]: Starting Apply Kernel Variables...
Dec 08 09:08:34 localhost systemd[1]: Starting Create System Users...
Dec 08 09:08:34 localhost systemd[1]: Starting Setup Virtual Console...
Dec 08 09:08:34 localhost systemd[1]: Finished Create List of Static Device Nodes.
Dec 08 09:08:34 localhost systemd[1]: Finished Apply Kernel Variables.
Dec 08 09:08:34 localhost systemd[1]: Finished Create System Users.
Dec 08 09:08:34 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Dec 08 09:08:34 localhost systemd-journald[309]: Journal started
Dec 08 09:08:34 localhost systemd-journald[309]: Runtime Journal (/run/log/journal/68fc26ba4a3944e48573ef42d1fd205b) is 8.0M, max 153.6M, 145.6M free.
Dec 08 09:08:34 localhost systemd-sysusers[314]: Creating group 'users' with GID 100.
Dec 08 09:08:34 localhost systemd-sysusers[314]: Creating group 'dbus' with GID 81.
Dec 08 09:08:34 localhost systemd-sysusers[314]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Dec 08 09:08:34 localhost systemd[1]: Started Journal Service.
Dec 08 09:08:34 localhost systemd[1]: Starting Create Volatile Files and Directories...
Dec 08 09:08:34 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Dec 08 09:08:34 localhost systemd[1]: Finished Create Volatile Files and Directories.
Dec 08 09:08:34 localhost systemd[1]: Finished Setup Virtual Console.
Dec 08 09:08:34 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Dec 08 09:08:34 localhost systemd[1]: Starting dracut cmdline hook...
Dec 08 09:08:34 localhost dracut-cmdline[328]: dracut-9 dracut-057-102.git20250818.el9
Dec 08 09:08:34 localhost dracut-cmdline[328]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64 root=UUID=fcf6b761-831a-48a7-9f5f-068b5063763f ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec 08 09:08:34 localhost systemd[1]: Finished dracut cmdline hook.
Dec 08 09:08:34 localhost systemd[1]: Starting dracut pre-udev hook...
Dec 08 09:08:34 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 08 09:08:34 localhost kernel: device-mapper: uevent: version 1.0.3
Dec 08 09:08:34 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Dec 08 09:08:34 localhost kernel: RPC: Registered named UNIX socket transport module.
Dec 08 09:08:34 localhost kernel: RPC: Registered udp transport module.
Dec 08 09:08:34 localhost kernel: RPC: Registered tcp transport module.
Dec 08 09:08:34 localhost kernel: RPC: Registered tcp-with-tls transport module.
Dec 08 09:08:34 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Dec 08 09:08:34 localhost rpc.statd[445]: Version 2.5.4 starting
Dec 08 09:08:34 localhost rpc.statd[445]: Initializing NSM state
Dec 08 09:08:34 localhost rpc.idmapd[450]: Setting log level to 0
Dec 08 09:08:34 localhost systemd[1]: Finished dracut pre-udev hook.
Dec 08 09:08:34 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec 08 09:08:34 localhost systemd-udevd[463]: Using default interface naming scheme 'rhel-9.0'.
Dec 08 09:08:34 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec 08 09:08:34 localhost systemd[1]: Starting dracut pre-trigger hook...
Dec 08 09:08:35 localhost systemd[1]: Finished dracut pre-trigger hook.
Dec 08 09:08:35 localhost systemd[1]: Starting Coldplug All udev Devices...
Dec 08 09:08:35 localhost systemd[1]: Created slice Slice /system/modprobe.
Dec 08 09:08:35 localhost systemd[1]: Starting Load Kernel Module configfs...
Dec 08 09:08:35 localhost systemd[1]: Finished Coldplug All udev Devices.
Dec 08 09:08:35 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 08 09:08:35 localhost systemd[1]: Finished Load Kernel Module configfs.
Dec 08 09:08:35 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec 08 09:08:35 localhost systemd[1]: Reached target Network.
Dec 08 09:08:35 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec 08 09:08:35 localhost systemd[1]: Starting dracut initqueue hook...
Dec 08 09:08:35 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Dec 08 09:08:35 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Dec 08 09:08:35 localhost kernel:  vda: vda1
Dec 08 09:08:35 localhost systemd-udevd[483]: Network interface NamePolicy= disabled on kernel command line.
Dec 08 09:08:35 localhost kernel: libata version 3.00 loaded.
Dec 08 09:08:35 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Dec 08 09:08:35 localhost kernel: scsi host0: ata_piix
Dec 08 09:08:35 localhost kernel: scsi host1: ata_piix
Dec 08 09:08:35 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Dec 08 09:08:35 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Dec 08 09:08:35 localhost systemd[1]: Mounting Kernel Configuration File System...
Dec 08 09:08:35 localhost systemd[1]: Mounted Kernel Configuration File System.
Dec 08 09:08:35 localhost systemd[1]: Found device /dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f.
Dec 08 09:08:35 localhost systemd[1]: Reached target Initrd Root Device.
Dec 08 09:08:35 localhost systemd[1]: Reached target System Initialization.
Dec 08 09:08:35 localhost systemd[1]: Reached target Basic System.
Dec 08 09:08:35 localhost kernel: ata1: found unknown device (class 0)
Dec 08 09:08:35 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec 08 09:08:35 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Dec 08 09:08:35 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Dec 08 09:08:35 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec 08 09:08:35 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 08 09:08:35 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Dec 08 09:08:35 localhost systemd[1]: Finished dracut initqueue hook.
Dec 08 09:08:35 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Dec 08 09:08:35 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Dec 08 09:08:35 localhost systemd[1]: Reached target Remote File Systems.
Dec 08 09:08:35 localhost systemd[1]: Starting dracut pre-mount hook...
Dec 08 09:08:35 localhost systemd[1]: Finished dracut pre-mount hook.
Dec 08 09:08:35 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f...
Dec 08 09:08:35 localhost systemd-fsck[557]: /usr/sbin/fsck.xfs: XFS file system.
Dec 08 09:08:35 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f.
Dec 08 09:08:35 localhost systemd[1]: Mounting /sysroot...
Dec 08 09:08:36 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Dec 08 09:08:36 localhost kernel: XFS (vda1): Mounting V5 Filesystem fcf6b761-831a-48a7-9f5f-068b5063763f
Dec 08 09:08:36 localhost kernel: XFS (vda1): Ending clean mount
Dec 08 09:08:36 localhost systemd[1]: Mounted /sysroot.
Dec 08 09:08:36 localhost systemd[1]: Reached target Initrd Root File System.
Dec 08 09:08:36 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Dec 08 09:08:36 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 08 09:08:36 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Dec 08 09:08:36 localhost systemd[1]: Reached target Initrd File Systems.
Dec 08 09:08:36 localhost systemd[1]: Reached target Initrd Default Target.
Dec 08 09:08:36 localhost systemd[1]: Starting dracut mount hook...
Dec 08 09:08:36 localhost systemd[1]: Finished dracut mount hook.
Dec 08 09:08:36 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Dec 08 09:08:36 localhost rpc.idmapd[450]: exiting on signal 15
Dec 08 09:08:36 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Dec 08 09:08:36 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Dec 08 09:08:36 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Dec 08 09:08:36 localhost systemd[1]: Stopped target Network.
Dec 08 09:08:36 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Dec 08 09:08:36 localhost systemd[1]: Stopped target Timer Units.
Dec 08 09:08:36 localhost systemd[1]: dbus.socket: Deactivated successfully.
Dec 08 09:08:36 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Dec 08 09:08:36 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 08 09:08:36 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Dec 08 09:08:36 localhost systemd[1]: Stopped target Initrd Default Target.
Dec 08 09:08:36 localhost systemd[1]: Stopped target Basic System.
Dec 08 09:08:36 localhost systemd[1]: Stopped target Initrd Root Device.
Dec 08 09:08:36 localhost systemd[1]: Stopped target Initrd /usr File System.
Dec 08 09:08:36 localhost systemd[1]: Stopped target Path Units.
Dec 08 09:08:36 localhost systemd[1]: Stopped target Remote File Systems.
Dec 08 09:08:36 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Dec 08 09:08:36 localhost systemd[1]: Stopped target Slice Units.
Dec 08 09:08:36 localhost systemd[1]: Stopped target Socket Units.
Dec 08 09:08:36 localhost systemd[1]: Stopped target System Initialization.
Dec 08 09:08:36 localhost systemd[1]: Stopped target Local File Systems.
Dec 08 09:08:36 localhost systemd[1]: Stopped target Swaps.
Dec 08 09:08:36 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Dec 08 09:08:36 localhost systemd[1]: Stopped dracut mount hook.
Dec 08 09:08:36 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 08 09:08:36 localhost systemd[1]: Stopped dracut pre-mount hook.
Dec 08 09:08:36 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Dec 08 09:08:36 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 08 09:08:36 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Dec 08 09:08:36 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 08 09:08:36 localhost systemd[1]: Stopped dracut initqueue hook.
Dec 08 09:08:36 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 08 09:08:36 localhost systemd[1]: Stopped Apply Kernel Variables.
Dec 08 09:08:36 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 08 09:08:36 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Dec 08 09:08:36 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 08 09:08:36 localhost systemd[1]: Stopped Coldplug All udev Devices.
Dec 08 09:08:36 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 08 09:08:36 localhost systemd[1]: Stopped dracut pre-trigger hook.
Dec 08 09:08:36 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Dec 08 09:08:36 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 08 09:08:36 localhost systemd[1]: Stopped Setup Virtual Console.
Dec 08 09:08:36 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec 08 09:08:36 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 08 09:08:36 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 08 09:08:36 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Dec 08 09:08:36 localhost systemd[1]: systemd-udevd.service: Consumed 1.039s CPU time.
Dec 08 09:08:36 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 08 09:08:36 localhost systemd[1]: Closed udev Control Socket.
Dec 08 09:08:36 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 08 09:08:36 localhost systemd[1]: Closed udev Kernel Socket.
Dec 08 09:08:36 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 08 09:08:36 localhost systemd[1]: Stopped dracut pre-udev hook.
Dec 08 09:08:36 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 08 09:08:36 localhost systemd[1]: Stopped dracut cmdline hook.
Dec 08 09:08:36 localhost systemd[1]: Starting Cleanup udev Database...
Dec 08 09:08:36 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 08 09:08:36 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Dec 08 09:08:36 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 08 09:08:36 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Dec 08 09:08:36 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Dec 08 09:08:36 localhost systemd[1]: Stopped Create System Users.
Dec 08 09:08:36 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec 08 09:08:36 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Dec 08 09:08:36 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 08 09:08:36 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Dec 08 09:08:36 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 08 09:08:36 localhost systemd[1]: Finished Cleanup udev Database.
Dec 08 09:08:36 localhost systemd[1]: Reached target Switch Root.
Dec 08 09:08:36 localhost systemd[1]: Starting Switch Root...
Dec 08 09:08:36 localhost systemd[1]: Switching root.
Dec 08 09:08:36 localhost systemd-journald[309]: Received SIGTERM from PID 1 (systemd).
Dec 08 09:08:36 localhost systemd-journald[309]: Journal stopped
Dec 08 09:08:37 localhost kernel: audit: type=1404 audit(1765184916.873:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Dec 08 09:08:37 localhost kernel: SELinux:  policy capability network_peer_controls=1
Dec 08 09:08:37 localhost kernel: SELinux:  policy capability open_perms=1
Dec 08 09:08:37 localhost kernel: SELinux:  policy capability extended_socket_class=1
Dec 08 09:08:37 localhost kernel: SELinux:  policy capability always_check_network=0
Dec 08 09:08:37 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 08 09:08:37 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 08 09:08:37 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 08 09:08:37 localhost kernel: audit: type=1403 audit(1765184917.025:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 08 09:08:37 localhost systemd[1]: Successfully loaded SELinux policy in 154.835ms.
Dec 08 09:08:37 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 107.925ms.
Dec 08 09:08:37 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 08 09:08:37 localhost systemd[1]: Detected virtualization kvm.
Dec 08 09:08:37 localhost systemd[1]: Detected architecture x86-64.
Dec 08 09:08:37 localhost systemd-rc-local-generator[637]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 08 09:08:37 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 08 09:08:37 localhost systemd[1]: Stopped Switch Root.
Dec 08 09:08:37 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 08 09:08:37 localhost systemd[1]: Created slice Slice /system/getty.
Dec 08 09:08:37 localhost systemd[1]: Created slice Slice /system/serial-getty.
Dec 08 09:08:37 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Dec 08 09:08:37 localhost systemd[1]: Created slice User and Session Slice.
Dec 08 09:08:37 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Dec 08 09:08:37 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Dec 08 09:08:37 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Dec 08 09:08:37 localhost systemd[1]: Reached target Local Encrypted Volumes.
Dec 08 09:08:37 localhost systemd[1]: Stopped target Switch Root.
Dec 08 09:08:37 localhost systemd[1]: Stopped target Initrd File Systems.
Dec 08 09:08:37 localhost systemd[1]: Stopped target Initrd Root File System.
Dec 08 09:08:37 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Dec 08 09:08:37 localhost systemd[1]: Reached target Path Units.
Dec 08 09:08:37 localhost systemd[1]: Reached target rpc_pipefs.target.
Dec 08 09:08:37 localhost systemd[1]: Reached target Slice Units.
Dec 08 09:08:37 localhost systemd[1]: Reached target Swaps.
Dec 08 09:08:37 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Dec 08 09:08:37 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Dec 08 09:08:37 localhost systemd[1]: Reached target RPC Port Mapper.
Dec 08 09:08:37 localhost systemd[1]: Listening on Process Core Dump Socket.
Dec 08 09:08:37 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Dec 08 09:08:37 localhost systemd[1]: Listening on udev Control Socket.
Dec 08 09:08:37 localhost systemd[1]: Listening on udev Kernel Socket.
Dec 08 09:08:37 localhost systemd[1]: Mounting Huge Pages File System...
Dec 08 09:08:37 localhost systemd[1]: Mounting POSIX Message Queue File System...
Dec 08 09:08:37 localhost systemd[1]: Mounting Kernel Debug File System...
Dec 08 09:08:37 localhost systemd[1]: Mounting Kernel Trace File System...
Dec 08 09:08:37 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec 08 09:08:37 localhost systemd[1]: Starting Create List of Static Device Nodes...
Dec 08 09:08:37 localhost systemd[1]: Starting Load Kernel Module configfs...
Dec 08 09:08:37 localhost systemd[1]: Starting Load Kernel Module drm...
Dec 08 09:08:37 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Dec 08 09:08:37 localhost systemd[1]: Starting Load Kernel Module fuse...
Dec 08 09:08:37 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Dec 08 09:08:37 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 08 09:08:37 localhost systemd[1]: Stopped File System Check on Root Device.
Dec 08 09:08:37 localhost systemd[1]: Stopped Journal Service.
Dec 08 09:08:37 localhost systemd[1]: Starting Journal Service...
Dec 08 09:08:37 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec 08 09:08:37 localhost systemd[1]: Starting Generate network units from Kernel command line...
Dec 08 09:08:37 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 08 09:08:37 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Dec 08 09:08:37 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 08 09:08:37 localhost systemd[1]: Starting Apply Kernel Variables...
Dec 08 09:08:37 localhost systemd[1]: Starting Coldplug All udev Devices...
Dec 08 09:08:37 localhost systemd[1]: Mounted Huge Pages File System.
Dec 08 09:08:37 localhost systemd[1]: Mounted POSIX Message Queue File System.
Dec 08 09:08:37 localhost systemd[1]: Mounted Kernel Debug File System.
Dec 08 09:08:37 localhost systemd-journald[678]: Journal started
Dec 08 09:08:37 localhost systemd-journald[678]: Runtime Journal (/run/log/journal/4d4ef2323cc3337bbfd9081b2a323b4e) is 8.0M, max 153.6M, 145.6M free.
Dec 08 09:08:37 localhost systemd[1]: Queued start job for default target Multi-User System.
Dec 08 09:08:37 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 08 09:08:37 localhost systemd[1]: Started Journal Service.
Dec 08 09:08:37 localhost systemd[1]: Mounted Kernel Trace File System.
Dec 08 09:08:37 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Dec 08 09:08:37 localhost systemd[1]: Finished Create List of Static Device Nodes.
Dec 08 09:08:37 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 08 09:08:37 localhost systemd[1]: Finished Load Kernel Module configfs.
Dec 08 09:08:37 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 08 09:08:37 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Dec 08 09:08:37 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Dec 08 09:08:37 localhost systemd[1]: Finished Generate network units from Kernel command line.
Dec 08 09:08:37 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Dec 08 09:08:37 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec 08 09:08:37 localhost kernel: fuse: init (API version 7.37)
Dec 08 09:08:37 localhost systemd[1]: Starting Rebuild Hardware Database...
Dec 08 09:08:37 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Dec 08 09:08:37 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 08 09:08:37 localhost systemd[1]: Starting Load/Save OS Random Seed...
Dec 08 09:08:37 localhost systemd[1]: Starting Create System Users...
Dec 08 09:08:37 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 08 09:08:37 localhost systemd[1]: Finished Load Kernel Module fuse.
Dec 08 09:08:37 localhost systemd[1]: Finished Apply Kernel Variables.
Dec 08 09:08:37 localhost systemd-journald[678]: Runtime Journal (/run/log/journal/4d4ef2323cc3337bbfd9081b2a323b4e) is 8.0M, max 153.6M, 145.6M free.
Dec 08 09:08:37 localhost systemd-journald[678]: Received client request to flush runtime journal.
Dec 08 09:08:37 localhost systemd[1]: Mounting FUSE Control File System...
Dec 08 09:08:37 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Dec 08 09:08:37 localhost systemd[1]: Finished Load/Save OS Random Seed.
Dec 08 09:08:37 localhost kernel: ACPI: bus type drm_connector registered
Dec 08 09:08:37 localhost systemd[1]: Mounted FUSE Control File System.
Dec 08 09:08:37 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 08 09:08:37 localhost systemd[1]: Finished Load Kernel Module drm.
Dec 08 09:08:37 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec 08 09:08:37 localhost systemd[1]: Finished Create System Users.
Dec 08 09:08:37 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Dec 08 09:08:37 localhost systemd[1]: Finished Coldplug All udev Devices.
Dec 08 09:08:37 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Dec 08 09:08:37 localhost systemd[1]: Reached target Preparation for Local File Systems.
Dec 08 09:08:37 localhost systemd[1]: Reached target Local File Systems.
Dec 08 09:08:37 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Dec 08 09:08:37 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Dec 08 09:08:37 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 08 09:08:37 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Dec 08 09:08:37 localhost systemd[1]: Starting Automatic Boot Loader Update...
Dec 08 09:08:37 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Dec 08 09:08:37 localhost systemd[1]: Starting Create Volatile Files and Directories...
Dec 08 09:08:37 localhost bootctl[695]: Couldn't find EFI system partition, skipping.
Dec 08 09:08:37 localhost systemd[1]: Finished Automatic Boot Loader Update.
Dec 08 09:08:37 localhost systemd[1]: Finished Create Volatile Files and Directories.
Dec 08 09:08:37 localhost systemd[1]: Starting Security Auditing Service...
Dec 08 09:08:37 localhost systemd[1]: Starting RPC Bind...
Dec 08 09:08:37 localhost systemd[1]: Starting Rebuild Journal Catalog...
Dec 08 09:08:37 localhost auditd[701]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Dec 08 09:08:37 localhost auditd[701]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Dec 08 09:08:37 localhost systemd[1]: Finished Rebuild Journal Catalog.
Dec 08 09:08:37 localhost systemd[1]: Started RPC Bind.
Dec 08 09:08:38 localhost augenrules[706]: /sbin/augenrules: No change
Dec 08 09:08:38 localhost augenrules[721]: No rules
Dec 08 09:08:38 localhost augenrules[721]: enabled 1
Dec 08 09:08:38 localhost augenrules[721]: failure 1
Dec 08 09:08:38 localhost augenrules[721]: pid 701
Dec 08 09:08:38 localhost augenrules[721]: rate_limit 0
Dec 08 09:08:38 localhost augenrules[721]: backlog_limit 8192
Dec 08 09:08:38 localhost augenrules[721]: lost 0
Dec 08 09:08:38 localhost augenrules[721]: backlog 3
Dec 08 09:08:38 localhost augenrules[721]: backlog_wait_time 60000
Dec 08 09:08:38 localhost augenrules[721]: backlog_wait_time_actual 0
Dec 08 09:08:38 localhost augenrules[721]: enabled 1
Dec 08 09:08:38 localhost augenrules[721]: failure 1
Dec 08 09:08:38 localhost augenrules[721]: pid 701
Dec 08 09:08:38 localhost augenrules[721]: rate_limit 0
Dec 08 09:08:38 localhost augenrules[721]: backlog_limit 8192
Dec 08 09:08:38 localhost augenrules[721]: lost 0
Dec 08 09:08:38 localhost augenrules[721]: backlog 0
Dec 08 09:08:38 localhost augenrules[721]: backlog_wait_time 60000
Dec 08 09:08:38 localhost augenrules[721]: backlog_wait_time_actual 0
Dec 08 09:08:38 localhost augenrules[721]: enabled 1
Dec 08 09:08:38 localhost augenrules[721]: failure 1
Dec 08 09:08:38 localhost augenrules[721]: pid 701
Dec 08 09:08:38 localhost augenrules[721]: rate_limit 0
Dec 08 09:08:38 localhost augenrules[721]: backlog_limit 8192
Dec 08 09:08:38 localhost augenrules[721]: lost 0
Dec 08 09:08:38 localhost augenrules[721]: backlog 1
Dec 08 09:08:38 localhost augenrules[721]: backlog_wait_time 60000
Dec 08 09:08:38 localhost augenrules[721]: backlog_wait_time_actual 0
Dec 08 09:08:38 localhost systemd[1]: Started Security Auditing Service.
Dec 08 09:08:38 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Dec 08 09:08:38 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Dec 08 09:08:38 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Dec 08 09:08:38 localhost systemd[1]: Finished Rebuild Hardware Database.
Dec 08 09:08:38 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec 08 09:08:38 localhost systemd[1]: Starting Update is Completed...
Dec 08 09:08:38 localhost systemd[1]: Finished Update is Completed.
Dec 08 09:08:38 localhost systemd-udevd[729]: Using default interface naming scheme 'rhel-9.0'.
Dec 08 09:08:38 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec 08 09:08:38 localhost systemd[1]: Reached target System Initialization.
Dec 08 09:08:38 localhost systemd[1]: Started dnf makecache --timer.
Dec 08 09:08:38 localhost systemd[1]: Started Daily rotation of log files.
Dec 08 09:08:38 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Dec 08 09:08:38 localhost systemd[1]: Reached target Timer Units.
Dec 08 09:08:38 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Dec 08 09:08:38 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Dec 08 09:08:38 localhost systemd[1]: Reached target Socket Units.
Dec 08 09:08:38 localhost systemd[1]: Starting D-Bus System Message Bus...
Dec 08 09:08:38 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 08 09:08:38 localhost systemd[1]: Starting Load Kernel Module configfs...
Dec 08 09:08:38 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 08 09:08:38 localhost systemd[1]: Finished Load Kernel Module configfs.
Dec 08 09:08:38 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Dec 08 09:08:38 localhost systemd-udevd[743]: Network interface NamePolicy= disabled on kernel command line.
Dec 08 09:08:39 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Dec 08 09:08:39 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Dec 08 09:08:39 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec 08 09:08:39 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Dec 08 09:08:39 localhost systemd[1]: Started D-Bus System Message Bus.
Dec 08 09:08:39 localhost systemd[1]: Reached target Basic System.
Dec 08 09:08:39 localhost dbus-broker-lau[736]: Ready
Dec 08 09:08:39 localhost systemd[1]: Starting NTP client/server...
Dec 08 09:08:39 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Dec 08 09:08:39 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Dec 08 09:08:39 localhost systemd[1]: Starting IPv4 firewall with iptables...
Dec 08 09:08:39 localhost chronyd[789]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec 08 09:08:39 localhost chronyd[789]: Loaded 0 symmetric keys
Dec 08 09:08:39 localhost chronyd[789]: Using right/UTC timezone to obtain leap second data
Dec 08 09:08:39 localhost chronyd[789]: Loaded seccomp filter (level 2)
Dec 08 09:08:39 localhost systemd[1]: Started irqbalance daemon.
Dec 08 09:08:39 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Dec 08 09:08:39 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 08 09:08:39 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 08 09:08:39 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 08 09:08:39 localhost systemd[1]: Reached target sshd-keygen.target.
Dec 08 09:08:39 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Dec 08 09:08:39 localhost systemd[1]: Reached target User and Group Name Lookups.
Dec 08 09:08:39 localhost systemd[1]: Starting User Login Management...
Dec 08 09:08:39 localhost systemd[1]: Started NTP client/server.
Dec 08 09:08:39 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Dec 08 09:08:39 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Dec 08 09:08:39 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Dec 08 09:08:39 localhost kernel: Console: switching to colour dummy device 80x25
Dec 08 09:08:39 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Dec 08 09:08:39 localhost kernel: [drm] features: -context_init
Dec 08 09:08:39 localhost kernel: [drm] number of scanouts: 1
Dec 08 09:08:39 localhost kernel: [drm] number of cap sets: 0
Dec 08 09:08:39 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Dec 08 09:08:39 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Dec 08 09:08:39 localhost kernel: Console: switching to colour frame buffer device 128x48
Dec 08 09:08:39 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Dec 08 09:08:39 localhost kernel: kvm_amd: TSC scaling supported
Dec 08 09:08:39 localhost kernel: kvm_amd: Nested Virtualization enabled
Dec 08 09:08:39 localhost kernel: kvm_amd: Nested Paging enabled
Dec 08 09:08:39 localhost kernel: kvm_amd: LBR virtualization supported
Dec 08 09:08:39 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Dec 08 09:08:39 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Dec 08 09:08:39 localhost systemd-logind[795]: New seat seat0.
Dec 08 09:08:39 localhost systemd-logind[795]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 08 09:08:39 localhost systemd-logind[795]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec 08 09:08:39 localhost systemd[1]: Started User Login Management.
Dec 08 09:08:39 localhost iptables.init[781]: iptables: Applying firewall rules: [  OK  ]
Dec 08 09:08:39 localhost systemd[1]: Finished IPv4 firewall with iptables.
Dec 08 09:08:40 localhost cloud-init[839]: Cloud-init v. 24.4-7.el9 running 'init-local' at Mon, 08 Dec 2025 09:08:39 +0000. Up 9.43 seconds.
Dec 08 09:08:40 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Dec 08 09:08:40 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Dec 08 09:08:40 localhost systemd[1]: run-cloud\x2dinit-tmp-tmpknqy5fer.mount: Deactivated successfully.
Dec 08 09:08:40 localhost systemd[1]: Starting Hostname Service...
Dec 08 09:08:40 localhost systemd[1]: Started Hostname Service.
Dec 08 09:08:40 np0005550138.novalocal systemd-hostnamed[853]: Hostname set to <np0005550138.novalocal> (static)
Dec 08 09:08:40 np0005550138.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Dec 08 09:08:40 np0005550138.novalocal systemd[1]: Reached target Preparation for Network.
Dec 08 09:08:40 np0005550138.novalocal systemd[1]: Starting Network Manager...
Dec 08 09:08:40 np0005550138.novalocal NetworkManager[857]: <info>  [1765184920.5925] NetworkManager (version 1.54.1-1.el9) is starting... (boot:f484d2fb-c260-4eb2-82b3-d1ac68e69214)
Dec 08 09:08:40 np0005550138.novalocal NetworkManager[857]: <info>  [1765184920.5931] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec 08 09:08:40 np0005550138.novalocal NetworkManager[857]: <info>  [1765184920.5997] manager[0x555c51cda080]: monitoring kernel firmware directory '/lib/firmware'.
Dec 08 09:08:40 np0005550138.novalocal NetworkManager[857]: <info>  [1765184920.6035] hostname: hostname: using hostnamed
Dec 08 09:08:40 np0005550138.novalocal NetworkManager[857]: <info>  [1765184920.6036] hostname: static hostname changed from (none) to "np0005550138.novalocal"
Dec 08 09:08:40 np0005550138.novalocal NetworkManager[857]: <info>  [1765184920.6040] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec 08 09:08:40 np0005550138.novalocal NetworkManager[857]: <info>  [1765184920.6134] manager[0x555c51cda080]: rfkill: Wi-Fi hardware radio set enabled
Dec 08 09:08:40 np0005550138.novalocal NetworkManager[857]: <info>  [1765184920.6135] manager[0x555c51cda080]: rfkill: WWAN hardware radio set enabled
Dec 08 09:08:40 np0005550138.novalocal NetworkManager[857]: <info>  [1765184920.6183] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Dec 08 09:08:40 np0005550138.novalocal NetworkManager[857]: <info>  [1765184920.6185] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 08 09:08:40 np0005550138.novalocal NetworkManager[857]: <info>  [1765184920.6185] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 08 09:08:40 np0005550138.novalocal NetworkManager[857]: <info>  [1765184920.6185] manager: Networking is enabled by state file
Dec 08 09:08:40 np0005550138.novalocal NetworkManager[857]: <info>  [1765184920.6187] settings: Loaded settings plugin: keyfile (internal)
Dec 08 09:08:40 np0005550138.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Dec 08 09:08:40 np0005550138.novalocal NetworkManager[857]: <info>  [1765184920.6200] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec 08 09:08:40 np0005550138.novalocal NetworkManager[857]: <info>  [1765184920.6236] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec 08 09:08:40 np0005550138.novalocal NetworkManager[857]: <info>  [1765184920.6247] dhcp: init: Using DHCP client 'internal'
Dec 08 09:08:40 np0005550138.novalocal NetworkManager[857]: <info>  [1765184920.6249] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec 08 09:08:40 np0005550138.novalocal NetworkManager[857]: <info>  [1765184920.6261] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 08 09:08:40 np0005550138.novalocal NetworkManager[857]: <info>  [1765184920.6268] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec 08 09:08:40 np0005550138.novalocal NetworkManager[857]: <info>  [1765184920.6274] device (lo): Activation: starting connection 'lo' (ce717743-79fb-4648-bac2-02cc511629a9)
Dec 08 09:08:40 np0005550138.novalocal NetworkManager[857]: <info>  [1765184920.6282] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec 08 09:08:40 np0005550138.novalocal NetworkManager[857]: <info>  [1765184920.6284] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 08 09:08:40 np0005550138.novalocal NetworkManager[857]: <info>  [1765184920.6321] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 08 09:08:40 np0005550138.novalocal NetworkManager[857]: <info>  [1765184920.6324] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec 08 09:08:40 np0005550138.novalocal NetworkManager[857]: <info>  [1765184920.6326] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec 08 09:08:40 np0005550138.novalocal NetworkManager[857]: <info>  [1765184920.6328] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec 08 09:08:40 np0005550138.novalocal NetworkManager[857]: <info>  [1765184920.6330] device (eth0): carrier: link connected
Dec 08 09:08:40 np0005550138.novalocal NetworkManager[857]: <info>  [1765184920.6332] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec 08 09:08:40 np0005550138.novalocal NetworkManager[857]: <info>  [1765184920.6337] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec 08 09:08:40 np0005550138.novalocal NetworkManager[857]: <info>  [1765184920.6343] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 08 09:08:40 np0005550138.novalocal NetworkManager[857]: <info>  [1765184920.6346] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 08 09:08:40 np0005550138.novalocal NetworkManager[857]: <info>  [1765184920.6347] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 08 09:08:40 np0005550138.novalocal NetworkManager[857]: <info>  [1765184920.6348] manager: NetworkManager state is now CONNECTING
Dec 08 09:08:40 np0005550138.novalocal NetworkManager[857]: <info>  [1765184920.6350] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 08 09:08:40 np0005550138.novalocal NetworkManager[857]: <info>  [1765184920.6354] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 08 09:08:40 np0005550138.novalocal NetworkManager[857]: <info>  [1765184920.6357] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 08 09:08:40 np0005550138.novalocal NetworkManager[857]: <info>  [1765184920.6432] dhcp4 (eth0): state changed new lease, address=38.102.83.181
Dec 08 09:08:40 np0005550138.novalocal NetworkManager[857]: <info>  [1765184920.6438] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec 08 09:08:40 np0005550138.novalocal NetworkManager[857]: <info>  [1765184920.6455] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 08 09:08:40 np0005550138.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 08 09:08:40 np0005550138.novalocal systemd[1]: Started Network Manager.
Dec 08 09:08:40 np0005550138.novalocal systemd[1]: Reached target Network.
Dec 08 09:08:40 np0005550138.novalocal systemd[1]: Starting Network Manager Wait Online...
Dec 08 09:08:40 np0005550138.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Dec 08 09:08:40 np0005550138.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 08 09:08:40 np0005550138.novalocal NetworkManager[857]: <info>  [1765184920.6630] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec 08 09:08:40 np0005550138.novalocal NetworkManager[857]: <info>  [1765184920.6631] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 08 09:08:40 np0005550138.novalocal NetworkManager[857]: <info>  [1765184920.6633] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec 08 09:08:40 np0005550138.novalocal NetworkManager[857]: <info>  [1765184920.6638] device (lo): Activation: successful, device activated.
Dec 08 09:08:40 np0005550138.novalocal NetworkManager[857]: <info>  [1765184920.6643] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 08 09:08:40 np0005550138.novalocal NetworkManager[857]: <info>  [1765184920.6647] manager: NetworkManager state is now CONNECTED_SITE
Dec 08 09:08:40 np0005550138.novalocal NetworkManager[857]: <info>  [1765184920.6649] device (eth0): Activation: successful, device activated.
Dec 08 09:08:40 np0005550138.novalocal NetworkManager[857]: <info>  [1765184920.6653] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec 08 09:08:40 np0005550138.novalocal NetworkManager[857]: <info>  [1765184920.6655] manager: startup complete
Dec 08 09:08:40 np0005550138.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Dec 08 09:08:40 np0005550138.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec 08 09:08:40 np0005550138.novalocal systemd[1]: Reached target NFS client services.
Dec 08 09:08:40 np0005550138.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Dec 08 09:08:40 np0005550138.novalocal systemd[1]: Reached target Remote File Systems.
Dec 08 09:08:40 np0005550138.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 08 09:08:40 np0005550138.novalocal systemd[1]: Finished Network Manager Wait Online.
Dec 08 09:08:40 np0005550138.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Dec 08 09:08:41 np0005550138.novalocal cloud-init[921]: Cloud-init v. 24.4-7.el9 running 'init' at Mon, 08 Dec 2025 09:08:41 +0000. Up 10.52 seconds.
Dec 08 09:08:41 np0005550138.novalocal cloud-init[921]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Dec 08 09:08:41 np0005550138.novalocal cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec 08 09:08:41 np0005550138.novalocal cloud-init[921]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Dec 08 09:08:41 np0005550138.novalocal cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec 08 09:08:41 np0005550138.novalocal cloud-init[921]: ci-info: |  eth0  | True |        38.102.83.181         | 255.255.255.0 | global | fa:16:3e:37:1a:77 |
Dec 08 09:08:41 np0005550138.novalocal cloud-init[921]: ci-info: |  eth0  | True | fe80::f816:3eff:fe37:1a77/64 |       .       |  link  | fa:16:3e:37:1a:77 |
Dec 08 09:08:41 np0005550138.novalocal cloud-init[921]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Dec 08 09:08:41 np0005550138.novalocal cloud-init[921]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Dec 08 09:08:41 np0005550138.novalocal cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec 08 09:08:41 np0005550138.novalocal cloud-init[921]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Dec 08 09:08:41 np0005550138.novalocal cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec 08 09:08:41 np0005550138.novalocal cloud-init[921]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Dec 08 09:08:41 np0005550138.novalocal cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec 08 09:08:41 np0005550138.novalocal cloud-init[921]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Dec 08 09:08:41 np0005550138.novalocal cloud-init[921]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Dec 08 09:08:41 np0005550138.novalocal cloud-init[921]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Dec 08 09:08:41 np0005550138.novalocal cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec 08 09:08:41 np0005550138.novalocal cloud-init[921]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Dec 08 09:08:41 np0005550138.novalocal cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Dec 08 09:08:41 np0005550138.novalocal cloud-init[921]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Dec 08 09:08:41 np0005550138.novalocal cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Dec 08 09:08:41 np0005550138.novalocal cloud-init[921]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Dec 08 09:08:41 np0005550138.novalocal cloud-init[921]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Dec 08 09:08:41 np0005550138.novalocal cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Dec 08 09:08:42 np0005550138.novalocal useradd[988]: new group: name=cloud-user, GID=1001
Dec 08 09:08:42 np0005550138.novalocal useradd[988]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Dec 08 09:08:42 np0005550138.novalocal useradd[988]: add 'cloud-user' to group 'adm'
Dec 08 09:08:42 np0005550138.novalocal useradd[988]: add 'cloud-user' to group 'systemd-journal'
Dec 08 09:08:42 np0005550138.novalocal useradd[988]: add 'cloud-user' to shadow group 'adm'
Dec 08 09:08:42 np0005550138.novalocal useradd[988]: add 'cloud-user' to shadow group 'systemd-journal'
Dec 08 09:08:42 np0005550138.novalocal cloud-init[921]: Generating public/private rsa key pair.
Dec 08 09:08:42 np0005550138.novalocal cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Dec 08 09:08:42 np0005550138.novalocal cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Dec 08 09:08:42 np0005550138.novalocal cloud-init[921]: The key fingerprint is:
Dec 08 09:08:42 np0005550138.novalocal cloud-init[921]: SHA256:j0rbImBPiDEc/NZb64srx3oL8+38i1CT2TalWUCwzYQ root@np0005550138.novalocal
Dec 08 09:08:42 np0005550138.novalocal cloud-init[921]: The key's randomart image is:
Dec 08 09:08:42 np0005550138.novalocal cloud-init[921]: +---[RSA 3072]----+
Dec 08 09:08:42 np0005550138.novalocal cloud-init[921]: |.      .++       |
Dec 08 09:08:42 np0005550138.novalocal cloud-init[921]: | o     E= .      |
Dec 08 09:08:42 np0005550138.novalocal cloud-init[921]: |. o .  . o o     |
Dec 08 09:08:42 np0005550138.novalocal cloud-init[921]: |o. o . .+ =      |
Dec 08 09:08:42 np0005550138.novalocal cloud-init[921]: | +..  o=S*       |
Dec 08 09:08:42 np0005550138.novalocal cloud-init[921]: |. + ....oo.      |
Dec 08 09:08:42 np0005550138.novalocal cloud-init[921]: | . *..o . .      |
Dec 08 09:08:42 np0005550138.novalocal cloud-init[921]: |   .*=*=.        |
Dec 08 09:08:42 np0005550138.novalocal cloud-init[921]: |   .=**B+o.      |
Dec 08 09:08:42 np0005550138.novalocal cloud-init[921]: +----[SHA256]-----+
Dec 08 09:08:42 np0005550138.novalocal cloud-init[921]: Generating public/private ecdsa key pair.
Dec 08 09:08:42 np0005550138.novalocal cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Dec 08 09:08:42 np0005550138.novalocal cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Dec 08 09:08:42 np0005550138.novalocal cloud-init[921]: The key fingerprint is:
Dec 08 09:08:42 np0005550138.novalocal cloud-init[921]: SHA256:ADsZLC3lFVa++MIbm8q/HLVDDDMGHTmwT4ZMntSX6gE root@np0005550138.novalocal
Dec 08 09:08:42 np0005550138.novalocal cloud-init[921]: The key's randomart image is:
Dec 08 09:08:42 np0005550138.novalocal cloud-init[921]: +---[ECDSA 256]---+
Dec 08 09:08:42 np0005550138.novalocal cloud-init[921]: |   +O+==..       |
Dec 08 09:08:42 np0005550138.novalocal cloud-init[921]: |  o*E%=.o        |
Dec 08 09:08:42 np0005550138.novalocal cloud-init[921]: |   o@.O+.        |
Dec 08 09:08:42 np0005550138.novalocal cloud-init[921]: |     *oB .       |
Dec 08 09:08:42 np0005550138.novalocal cloud-init[921]: |     .o.S        |
Dec 08 09:08:42 np0005550138.novalocal cloud-init[921]: |     ..+ .       |
Dec 08 09:08:42 np0005550138.novalocal cloud-init[921]: |      = +        |
Dec 08 09:08:42 np0005550138.novalocal cloud-init[921]: |   . . B .       |
Dec 08 09:08:42 np0005550138.novalocal cloud-init[921]: |    ooB.         |
Dec 08 09:08:42 np0005550138.novalocal cloud-init[921]: +----[SHA256]-----+
Dec 08 09:08:42 np0005550138.novalocal cloud-init[921]: Generating public/private ed25519 key pair.
Dec 08 09:08:42 np0005550138.novalocal cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Dec 08 09:08:42 np0005550138.novalocal cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Dec 08 09:08:42 np0005550138.novalocal cloud-init[921]: The key fingerprint is:
Dec 08 09:08:42 np0005550138.novalocal cloud-init[921]: SHA256:y9yD4SKEGFaiRDOpSY/U5Sage1F2fIMHLygFO84r+hs root@np0005550138.novalocal
Dec 08 09:08:42 np0005550138.novalocal cloud-init[921]: The key's randomart image is:
Dec 08 09:08:42 np0005550138.novalocal cloud-init[921]: +--[ED25519 256]--+
Dec 08 09:08:42 np0005550138.novalocal cloud-init[921]: |.B+o=ooo         |
Dec 08 09:08:42 np0005550138.novalocal cloud-init[921]: |++**.oo.+        |
Dec 08 09:08:42 np0005550138.novalocal cloud-init[921]: |BoB..o.o..       |
Dec 08 09:08:42 np0005550138.novalocal cloud-init[921]: |+B *o  .         |
Dec 08 09:08:42 np0005550138.novalocal cloud-init[921]: |o = .   S        |
Dec 08 09:08:42 np0005550138.novalocal cloud-init[921]: | . o   + =       |
Dec 08 09:08:42 np0005550138.novalocal cloud-init[921]: |. E . . * o      |
Dec 08 09:08:42 np0005550138.novalocal cloud-init[921]: |.. . . .   .     |
Dec 08 09:08:42 np0005550138.novalocal cloud-init[921]: |..o.             |
Dec 08 09:08:42 np0005550138.novalocal cloud-init[921]: +----[SHA256]-----+
Dec 08 09:08:42 np0005550138.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Dec 08 09:08:42 np0005550138.novalocal systemd[1]: Reached target Cloud-config availability.
Dec 08 09:08:42 np0005550138.novalocal systemd[1]: Reached target Network is Online.
Dec 08 09:08:42 np0005550138.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Dec 08 09:08:42 np0005550138.novalocal systemd[1]: Starting Crash recovery kernel arming...
Dec 08 09:08:42 np0005550138.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Dec 08 09:08:42 np0005550138.novalocal systemd[1]: Starting System Logging Service...
Dec 08 09:08:42 np0005550138.novalocal systemd[1]: Starting OpenSSH server daemon...
Dec 08 09:08:42 np0005550138.novalocal sm-notify[1004]: Version 2.5.4 starting
Dec 08 09:08:42 np0005550138.novalocal systemd[1]: Starting Permit User Sessions...
Dec 08 09:08:42 np0005550138.novalocal systemd[1]: Started Notify NFS peers of a restart.
Dec 08 09:08:42 np0005550138.novalocal sshd[1006]: Server listening on 0.0.0.0 port 22.
Dec 08 09:08:42 np0005550138.novalocal sshd[1006]: Server listening on :: port 22.
Dec 08 09:08:42 np0005550138.novalocal systemd[1]: Started OpenSSH server daemon.
Dec 08 09:08:42 np0005550138.novalocal systemd[1]: Finished Permit User Sessions.
Dec 08 09:08:42 np0005550138.novalocal systemd[1]: Started Command Scheduler.
Dec 08 09:08:42 np0005550138.novalocal rsyslogd[1005]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1005" x-info="https://www.rsyslog.com"] start
Dec 08 09:08:42 np0005550138.novalocal rsyslogd[1005]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Dec 08 09:08:42 np0005550138.novalocal systemd[1]: Started Getty on tty1.
Dec 08 09:08:42 np0005550138.novalocal systemd[1]: Started Serial Getty on ttyS0.
Dec 08 09:08:42 np0005550138.novalocal crond[1009]: (CRON) STARTUP (1.5.7)
Dec 08 09:08:42 np0005550138.novalocal crond[1009]: (CRON) INFO (Syslog will be used instead of sendmail.)
Dec 08 09:08:42 np0005550138.novalocal crond[1009]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 96% if used.)
Dec 08 09:08:42 np0005550138.novalocal crond[1009]: (CRON) INFO (running with inotify support)
Dec 08 09:08:42 np0005550138.novalocal systemd[1]: Reached target Login Prompts.
Dec 08 09:08:42 np0005550138.novalocal systemd[1]: Started System Logging Service.
Dec 08 09:08:42 np0005550138.novalocal systemd[1]: Reached target Multi-User System.
Dec 08 09:08:42 np0005550138.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Dec 08 09:08:42 np0005550138.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec 08 09:08:42 np0005550138.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Dec 08 09:08:42 np0005550138.novalocal rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 08 09:08:42 np0005550138.novalocal sshd-session[1062]: Connection reset by 38.102.83.114 port 59428 [preauth]
Dec 08 09:08:42 np0005550138.novalocal kdumpctl[1012]: kdump: No kdump initial ramdisk found.
Dec 08 09:08:42 np0005550138.novalocal kdumpctl[1012]: kdump: Rebuilding /boot/initramfs-5.14.0-645.el9.x86_64kdump.img
Dec 08 09:08:42 np0005550138.novalocal sshd-session[1067]: Unable to negotiate with 38.102.83.114 port 59436: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Dec 08 09:08:42 np0005550138.novalocal sshd-session[1074]: Connection reset by 38.102.83.114 port 59438 [preauth]
Dec 08 09:08:42 np0005550138.novalocal sshd-session[1086]: Unable to negotiate with 38.102.83.114 port 59444: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Dec 08 09:08:42 np0005550138.novalocal sshd-session[1099]: Unable to negotiate with 38.102.83.114 port 59456: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Dec 08 09:08:42 np0005550138.novalocal sshd-session[1109]: Connection reset by 38.102.83.114 port 59466 [preauth]
Dec 08 09:08:42 np0005550138.novalocal sshd-session[1146]: Unable to negotiate with 38.102.83.114 port 59484: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Dec 08 09:08:43 np0005550138.novalocal sshd-session[1149]: Unable to negotiate with 38.102.83.114 port 59488: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Dec 08 09:08:43 np0005550138.novalocal cloud-init[1159]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Mon, 08 Dec 2025 09:08:42 +0000. Up 12.42 seconds.
Dec 08 09:08:43 np0005550138.novalocal sshd-session[1130]: Connection closed by 38.102.83.114 port 59472 [preauth]
Dec 08 09:08:43 np0005550138.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Dec 08 09:08:43 np0005550138.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Dec 08 09:08:43 np0005550138.novalocal dracut[1283]: dracut-057-102.git20250818.el9
Dec 08 09:08:43 np0005550138.novalocal dracut[1285]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-645.el9.x86_64kdump.img 5.14.0-645.el9.x86_64
Dec 08 09:08:43 np0005550138.novalocal cloud-init[1339]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Mon, 08 Dec 2025 09:08:43 +0000. Up 12.91 seconds.
Dec 08 09:08:43 np0005550138.novalocal cloud-init[1357]: #############################################################
Dec 08 09:08:43 np0005550138.novalocal cloud-init[1359]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Dec 08 09:08:43 np0005550138.novalocal cloud-init[1361]: 256 SHA256:ADsZLC3lFVa++MIbm8q/HLVDDDMGHTmwT4ZMntSX6gE root@np0005550138.novalocal (ECDSA)
Dec 08 09:08:43 np0005550138.novalocal cloud-init[1366]: 256 SHA256:y9yD4SKEGFaiRDOpSY/U5Sage1F2fIMHLygFO84r+hs root@np0005550138.novalocal (ED25519)
Dec 08 09:08:43 np0005550138.novalocal cloud-init[1368]: 3072 SHA256:j0rbImBPiDEc/NZb64srx3oL8+38i1CT2TalWUCwzYQ root@np0005550138.novalocal (RSA)
Dec 08 09:08:43 np0005550138.novalocal cloud-init[1369]: -----END SSH HOST KEY FINGERPRINTS-----
Dec 08 09:08:43 np0005550138.novalocal cloud-init[1372]: #############################################################
Dec 08 09:08:43 np0005550138.novalocal cloud-init[1339]: Cloud-init v. 24.4-7.el9 finished at Mon, 08 Dec 2025 09:08:43 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 13.08 seconds
Dec 08 09:08:43 np0005550138.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Dec 08 09:08:43 np0005550138.novalocal systemd[1]: Reached target Cloud-init target.
Dec 08 09:08:43 np0005550138.novalocal dracut[1285]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Dec 08 09:08:43 np0005550138.novalocal dracut[1285]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Dec 08 09:08:43 np0005550138.novalocal dracut[1285]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Dec 08 09:08:43 np0005550138.novalocal dracut[1285]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec 08 09:08:43 np0005550138.novalocal dracut[1285]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec 08 09:08:43 np0005550138.novalocal dracut[1285]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec 08 09:08:43 np0005550138.novalocal dracut[1285]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec 08 09:08:43 np0005550138.novalocal dracut[1285]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec 08 09:08:43 np0005550138.novalocal dracut[1285]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec 08 09:08:43 np0005550138.novalocal dracut[1285]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec 08 09:08:43 np0005550138.novalocal dracut[1285]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec 08 09:08:43 np0005550138.novalocal dracut[1285]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec 08 09:08:43 np0005550138.novalocal dracut[1285]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec 08 09:08:43 np0005550138.novalocal dracut[1285]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec 08 09:08:44 np0005550138.novalocal dracut[1285]: Module 'ifcfg' will not be installed, because it's in the list to be omitted!
Dec 08 09:08:44 np0005550138.novalocal dracut[1285]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
Dec 08 09:08:44 np0005550138.novalocal dracut[1285]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec 08 09:08:44 np0005550138.novalocal dracut[1285]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec 08 09:08:44 np0005550138.novalocal dracut[1285]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec 08 09:08:44 np0005550138.novalocal dracut[1285]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec 08 09:08:44 np0005550138.novalocal dracut[1285]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec 08 09:08:44 np0005550138.novalocal dracut[1285]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec 08 09:08:44 np0005550138.novalocal dracut[1285]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec 08 09:08:44 np0005550138.novalocal dracut[1285]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec 08 09:08:44 np0005550138.novalocal dracut[1285]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec 08 09:08:44 np0005550138.novalocal dracut[1285]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec 08 09:08:44 np0005550138.novalocal dracut[1285]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec 08 09:08:44 np0005550138.novalocal dracut[1285]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec 08 09:08:44 np0005550138.novalocal dracut[1285]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec 08 09:08:44 np0005550138.novalocal dracut[1285]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec 08 09:08:44 np0005550138.novalocal dracut[1285]: Module 'resume' will not be installed, because it's in the list to be omitted!
Dec 08 09:08:44 np0005550138.novalocal dracut[1285]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Dec 08 09:08:44 np0005550138.novalocal dracut[1285]: Module 'earlykdump' will not be installed, because it's in the list to be omitted!
Dec 08 09:08:44 np0005550138.novalocal dracut[1285]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec 08 09:08:44 np0005550138.novalocal dracut[1285]: memstrack is not available
Dec 08 09:08:44 np0005550138.novalocal dracut[1285]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec 08 09:08:44 np0005550138.novalocal dracut[1285]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec 08 09:08:44 np0005550138.novalocal dracut[1285]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec 08 09:08:44 np0005550138.novalocal dracut[1285]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec 08 09:08:44 np0005550138.novalocal dracut[1285]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec 08 09:08:44 np0005550138.novalocal dracut[1285]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec 08 09:08:44 np0005550138.novalocal dracut[1285]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec 08 09:08:44 np0005550138.novalocal dracut[1285]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec 08 09:08:44 np0005550138.novalocal dracut[1285]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec 08 09:08:44 np0005550138.novalocal dracut[1285]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec 08 09:08:44 np0005550138.novalocal dracut[1285]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec 08 09:08:44 np0005550138.novalocal dracut[1285]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec 08 09:08:44 np0005550138.novalocal dracut[1285]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec 08 09:08:44 np0005550138.novalocal dracut[1285]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec 08 09:08:44 np0005550138.novalocal dracut[1285]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec 08 09:08:44 np0005550138.novalocal dracut[1285]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec 08 09:08:44 np0005550138.novalocal dracut[1285]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec 08 09:08:44 np0005550138.novalocal dracut[1285]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec 08 09:08:44 np0005550138.novalocal dracut[1285]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec 08 09:08:44 np0005550138.novalocal dracut[1285]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec 08 09:08:44 np0005550138.novalocal dracut[1285]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec 08 09:08:44 np0005550138.novalocal dracut[1285]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec 08 09:08:44 np0005550138.novalocal dracut[1285]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec 08 09:08:44 np0005550138.novalocal dracut[1285]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec 08 09:08:44 np0005550138.novalocal dracut[1285]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec 08 09:08:44 np0005550138.novalocal dracut[1285]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec 08 09:08:44 np0005550138.novalocal dracut[1285]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec 08 09:08:44 np0005550138.novalocal dracut[1285]: memstrack is not available
Dec 08 09:08:44 np0005550138.novalocal dracut[1285]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec 08 09:08:44 np0005550138.novalocal dracut[1285]: *** Including module: systemd ***
Dec 08 09:08:45 np0005550138.novalocal dracut[1285]: *** Including module: fips ***
Dec 08 09:08:45 np0005550138.novalocal dracut[1285]: *** Including module: systemd-initrd ***
Dec 08 09:08:45 np0005550138.novalocal dracut[1285]: *** Including module: i18n ***
Dec 08 09:08:45 np0005550138.novalocal dracut[1285]: *** Including module: drm ***
Dec 08 09:08:45 np0005550138.novalocal chronyd[789]: Selected source 54.39.23.64 (2.centos.pool.ntp.org)
Dec 08 09:08:45 np0005550138.novalocal chronyd[789]: System clock TAI offset set to 37 seconds
Dec 08 09:08:46 np0005550138.novalocal dracut[1285]: *** Including module: prefixdevname ***
Dec 08 09:08:46 np0005550138.novalocal dracut[1285]: *** Including module: kernel-modules ***
Dec 08 09:08:46 np0005550138.novalocal kernel: block vda: the capability attribute has been deprecated.
Dec 08 09:08:46 np0005550138.novalocal dracut[1285]: *** Including module: kernel-modules-extra ***
Dec 08 09:08:46 np0005550138.novalocal dracut[1285]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Dec 08 09:08:46 np0005550138.novalocal dracut[1285]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Dec 08 09:08:46 np0005550138.novalocal dracut[1285]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Dec 08 09:08:46 np0005550138.novalocal dracut[1285]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Dec 08 09:08:46 np0005550138.novalocal dracut[1285]: *** Including module: qemu ***
Dec 08 09:08:46 np0005550138.novalocal dracut[1285]: *** Including module: fstab-sys ***
Dec 08 09:08:46 np0005550138.novalocal dracut[1285]: *** Including module: rootfs-block ***
Dec 08 09:08:46 np0005550138.novalocal dracut[1285]: *** Including module: terminfo ***
Dec 08 09:08:46 np0005550138.novalocal dracut[1285]: *** Including module: udev-rules ***
Dec 08 09:08:47 np0005550138.novalocal dracut[1285]: Skipping udev rule: 91-permissions.rules
Dec 08 09:08:47 np0005550138.novalocal dracut[1285]: Skipping udev rule: 80-drivers-modprobe.rules
Dec 08 09:08:47 np0005550138.novalocal dracut[1285]: *** Including module: virtiofs ***
Dec 08 09:08:47 np0005550138.novalocal dracut[1285]: *** Including module: dracut-systemd ***
Dec 08 09:08:47 np0005550138.novalocal dracut[1285]: *** Including module: usrmount ***
Dec 08 09:08:47 np0005550138.novalocal dracut[1285]: *** Including module: base ***
Dec 08 09:08:47 np0005550138.novalocal dracut[1285]: *** Including module: fs-lib ***
Dec 08 09:08:47 np0005550138.novalocal dracut[1285]: *** Including module: kdumpbase ***
Dec 08 09:08:48 np0005550138.novalocal dracut[1285]: *** Including module: microcode_ctl-fw_dir_override ***
Dec 08 09:08:48 np0005550138.novalocal dracut[1285]:   microcode_ctl module: mangling fw_dir
Dec 08 09:08:48 np0005550138.novalocal dracut[1285]:     microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Dec 08 09:08:48 np0005550138.novalocal dracut[1285]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Dec 08 09:08:48 np0005550138.novalocal dracut[1285]:     microcode_ctl: configuration "intel" is ignored
Dec 08 09:08:48 np0005550138.novalocal dracut[1285]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Dec 08 09:08:48 np0005550138.novalocal dracut[1285]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Dec 08 09:08:48 np0005550138.novalocal dracut[1285]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Dec 08 09:08:48 np0005550138.novalocal dracut[1285]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Dec 08 09:08:48 np0005550138.novalocal dracut[1285]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Dec 08 09:08:48 np0005550138.novalocal dracut[1285]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Dec 08 09:08:48 np0005550138.novalocal dracut[1285]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Dec 08 09:08:48 np0005550138.novalocal dracut[1285]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Dec 08 09:08:48 np0005550138.novalocal dracut[1285]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Dec 08 09:08:48 np0005550138.novalocal dracut[1285]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Dec 08 09:08:48 np0005550138.novalocal dracut[1285]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Dec 08 09:08:48 np0005550138.novalocal dracut[1285]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Dec 08 09:08:48 np0005550138.novalocal dracut[1285]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Dec 08 09:08:48 np0005550138.novalocal dracut[1285]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Dec 08 09:08:48 np0005550138.novalocal dracut[1285]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Dec 08 09:08:48 np0005550138.novalocal dracut[1285]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Dec 08 09:08:48 np0005550138.novalocal dracut[1285]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Dec 08 09:08:48 np0005550138.novalocal dracut[1285]:     microcode_ctl: configuration "intel-06-8f-08" is ignored
Dec 08 09:08:48 np0005550138.novalocal dracut[1285]:     microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Dec 08 09:08:48 np0005550138.novalocal dracut[1285]: *** Including module: openssl ***
Dec 08 09:08:48 np0005550138.novalocal dracut[1285]: *** Including module: shutdown ***
Dec 08 09:08:48 np0005550138.novalocal dracut[1285]: *** Including module: squash ***
Dec 08 09:08:48 np0005550138.novalocal dracut[1285]: *** Including modules done ***
Dec 08 09:08:48 np0005550138.novalocal dracut[1285]: *** Installing kernel module dependencies ***
Dec 08 09:08:49 np0005550138.novalocal dracut[1285]: *** Installing kernel module dependencies done ***
Dec 08 09:08:49 np0005550138.novalocal dracut[1285]: *** Resolving executable dependencies ***
Dec 08 09:08:49 np0005550138.novalocal irqbalance[791]: Cannot change IRQ 25 affinity: Operation not permitted
Dec 08 09:08:49 np0005550138.novalocal irqbalance[791]: IRQ 25 affinity is now unmanaged
Dec 08 09:08:49 np0005550138.novalocal irqbalance[791]: Cannot change IRQ 31 affinity: Operation not permitted
Dec 08 09:08:49 np0005550138.novalocal irqbalance[791]: IRQ 31 affinity is now unmanaged
Dec 08 09:08:49 np0005550138.novalocal irqbalance[791]: Cannot change IRQ 28 affinity: Operation not permitted
Dec 08 09:08:49 np0005550138.novalocal irqbalance[791]: IRQ 28 affinity is now unmanaged
Dec 08 09:08:49 np0005550138.novalocal irqbalance[791]: Cannot change IRQ 32 affinity: Operation not permitted
Dec 08 09:08:49 np0005550138.novalocal irqbalance[791]: IRQ 32 affinity is now unmanaged
Dec 08 09:08:49 np0005550138.novalocal irqbalance[791]: Cannot change IRQ 30 affinity: Operation not permitted
Dec 08 09:08:49 np0005550138.novalocal irqbalance[791]: IRQ 30 affinity is now unmanaged
Dec 08 09:08:49 np0005550138.novalocal irqbalance[791]: Cannot change IRQ 29 affinity: Operation not permitted
Dec 08 09:08:49 np0005550138.novalocal irqbalance[791]: IRQ 29 affinity is now unmanaged
Dec 08 09:08:50 np0005550138.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 08 09:08:51 np0005550138.novalocal dracut[1285]: *** Resolving executable dependencies done ***
Dec 08 09:08:51 np0005550138.novalocal dracut[1285]: *** Generating early-microcode cpio image ***
Dec 08 09:08:51 np0005550138.novalocal dracut[1285]: *** Store current command line parameters ***
Dec 08 09:08:51 np0005550138.novalocal dracut[1285]: Stored kernel commandline:
Dec 08 09:08:51 np0005550138.novalocal dracut[1285]: No dracut internal kernel commandline stored in the initramfs
Dec 08 09:08:51 np0005550138.novalocal dracut[1285]: *** Install squash loader ***
Dec 08 09:08:52 np0005550138.novalocal dracut[1285]: *** Squashing the files inside the initramfs ***
Dec 08 09:08:53 np0005550138.novalocal dracut[1285]: *** Squashing the files inside the initramfs done ***
Dec 08 09:08:53 np0005550138.novalocal dracut[1285]: *** Creating image file '/boot/initramfs-5.14.0-645.el9.x86_64kdump.img' ***
Dec 08 09:08:53 np0005550138.novalocal dracut[1285]: *** Hardlinking files ***
Dec 08 09:08:53 np0005550138.novalocal dracut[1285]: Mode:           real
Dec 08 09:08:53 np0005550138.novalocal dracut[1285]: Files:          50
Dec 08 09:08:53 np0005550138.novalocal dracut[1285]: Linked:         0 files
Dec 08 09:08:53 np0005550138.novalocal dracut[1285]: Compared:       0 xattrs
Dec 08 09:08:53 np0005550138.novalocal dracut[1285]: Compared:       0 files
Dec 08 09:08:53 np0005550138.novalocal dracut[1285]: Saved:          0 B
Dec 08 09:08:53 np0005550138.novalocal dracut[1285]: Duration:       0.000645 seconds
Dec 08 09:08:53 np0005550138.novalocal dracut[1285]: *** Hardlinking files done ***
Dec 08 09:08:53 np0005550138.novalocal dracut[1285]: *** Creating initramfs image file '/boot/initramfs-5.14.0-645.el9.x86_64kdump.img' done ***
Dec 08 09:08:55 np0005550138.novalocal kdumpctl[1012]: kdump: kexec: loaded kdump kernel
Dec 08 09:08:55 np0005550138.novalocal kdumpctl[1012]: kdump: Starting kdump: [OK]
Dec 08 09:08:55 np0005550138.novalocal systemd[1]: Finished Crash recovery kernel arming.
Dec 08 09:08:55 np0005550138.novalocal systemd[1]: Startup finished in 3.409s (kernel) + 2.909s (initrd) + 18.259s (userspace) = 24.578s.
Dec 08 09:09:08 np0005550138.novalocal sshd-session[4296]: Accepted publickey for zuul from 38.102.83.114 port 52336 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Dec 08 09:09:08 np0005550138.novalocal systemd[1]: Created slice User Slice of UID 1000.
Dec 08 09:09:08 np0005550138.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Dec 08 09:09:08 np0005550138.novalocal systemd-logind[795]: New session 1 of user zuul.
Dec 08 09:09:08 np0005550138.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Dec 08 09:09:08 np0005550138.novalocal systemd[1]: Starting User Manager for UID 1000...
Dec 08 09:09:08 np0005550138.novalocal systemd[4300]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 08 09:09:08 np0005550138.novalocal systemd[4300]: Queued start job for default target Main User Target.
Dec 08 09:09:08 np0005550138.novalocal systemd[4300]: Created slice User Application Slice.
Dec 08 09:09:08 np0005550138.novalocal systemd[4300]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 08 09:09:08 np0005550138.novalocal systemd[4300]: Started Daily Cleanup of User's Temporary Directories.
Dec 08 09:09:08 np0005550138.novalocal systemd[4300]: Reached target Paths.
Dec 08 09:09:08 np0005550138.novalocal systemd[4300]: Reached target Timers.
Dec 08 09:09:08 np0005550138.novalocal systemd[4300]: Starting D-Bus User Message Bus Socket...
Dec 08 09:09:08 np0005550138.novalocal systemd[4300]: Starting Create User's Volatile Files and Directories...
Dec 08 09:09:08 np0005550138.novalocal systemd[4300]: Listening on D-Bus User Message Bus Socket.
Dec 08 09:09:08 np0005550138.novalocal systemd[4300]: Reached target Sockets.
Dec 08 09:09:08 np0005550138.novalocal systemd[4300]: Finished Create User's Volatile Files and Directories.
Dec 08 09:09:08 np0005550138.novalocal systemd[4300]: Reached target Basic System.
Dec 08 09:09:08 np0005550138.novalocal systemd[4300]: Reached target Main User Target.
Dec 08 09:09:08 np0005550138.novalocal systemd[4300]: Startup finished in 172ms.
Dec 08 09:09:08 np0005550138.novalocal systemd[1]: Started User Manager for UID 1000.
Dec 08 09:09:08 np0005550138.novalocal systemd[1]: Started Session 1 of User zuul.
Dec 08 09:09:08 np0005550138.novalocal sshd-session[4296]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 08 09:09:09 np0005550138.novalocal python3[4382]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 08 09:09:10 np0005550138.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 08 09:09:12 np0005550138.novalocal python3[4412]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 08 09:09:19 np0005550138.novalocal python3[4470]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 08 09:09:20 np0005550138.novalocal python3[4510]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Dec 08 09:09:22 np0005550138.novalocal python3[4536]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDNPPDJvGKcIuqeJPP+WbKWWQSzP2AJ/vKgCj78QYpHv/amAovd2vQw3w1iZfWCor/0upP4zWNZmAlvksskv7wb7ZPLhbmqsqKWGaUIk2sv48oO/cncA/qO3rs6C6o1AaGyy1wUBS9ESyom2uAc2Ai3NDfrqxfhcEcMQ56KX43PEQnvA+Z47OmYHmZqSUiJrIrCkMHU5yrc/8xSh1heDBsXdoQkPewf0iuTPY56Y7kvzEkmg4aa89jVT/sZhQSFg97A60CkTUGiDqMew2uCxpbmTRUYUKfe/C9afwqtykmzzUCa6svhRsZyzh7hDPzGFVfeTbkp5ieh01Z94nIuaYnwLVIw2VaOa2Eka34Mkc/OaVPHFmSu42kEU5hJWA3IBkkuyJPMZRHN/8m8C8uTXGiRGlPCOyz6FzyRPav1ypdhQuIvCUoM7fts0ySlGsNXUIOIoDLqSbHvSDpyaxIABMTf9J9OB66RWaa+35TRmHgzmfCZuheI7KyTjaCQuoONpxE= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 08 09:09:22 np0005550138.novalocal python3[4560]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:09:23 np0005550138.novalocal python3[4659]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 08 09:09:23 np0005550138.novalocal python3[4730]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765184962.7950375-252-253675559314752/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=1df8dede247c48adb2424ce13fe8a4bf_id_rsa follow=False checksum=18291d8501757c280404496843d3fff4bb4fa318 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:09:24 np0005550138.novalocal python3[4853]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 08 09:09:24 np0005550138.novalocal python3[4924]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765184963.7741349-307-9222089671850/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=1df8dede247c48adb2424ce13fe8a4bf_id_rsa.pub follow=False checksum=ec2d2d9aa3ea7e171027eb81d26b909d9e883caa backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:09:25 np0005550138.novalocal python3[4972]: ansible-ping Invoked with data=pong
Dec 08 09:09:26 np0005550138.novalocal python3[4996]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 08 09:09:31 np0005550138.novalocal python3[5054]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Dec 08 09:09:32 np0005550138.novalocal python3[5086]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:09:32 np0005550138.novalocal python3[5110]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:09:32 np0005550138.novalocal python3[5134]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:09:33 np0005550138.novalocal python3[5158]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:09:33 np0005550138.novalocal python3[5182]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:09:33 np0005550138.novalocal python3[5206]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:09:35 np0005550138.novalocal sudo[5230]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfrqdrgglrswjbxzdmpzwfeeadhuzhko ; /usr/bin/python3'
Dec 08 09:09:35 np0005550138.novalocal sudo[5230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:09:35 np0005550138.novalocal python3[5232]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:09:35 np0005550138.novalocal sudo[5230]: pam_unix(sudo:session): session closed for user root
Dec 08 09:09:35 np0005550138.novalocal sudo[5308]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iawgnivyljamkzdscslsdpqjptjpzgzs ; /usr/bin/python3'
Dec 08 09:09:35 np0005550138.novalocal sudo[5308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:09:35 np0005550138.novalocal python3[5310]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 08 09:09:36 np0005550138.novalocal sudo[5308]: pam_unix(sudo:session): session closed for user root
Dec 08 09:09:36 np0005550138.novalocal sudo[5381]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzaaxajlckkyjandlqcmhdrsmfehtill ; /usr/bin/python3'
Dec 08 09:09:36 np0005550138.novalocal sudo[5381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:09:36 np0005550138.novalocal python3[5383]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765184975.5877488-33-242582139751003/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:09:36 np0005550138.novalocal sudo[5381]: pam_unix(sudo:session): session closed for user root
Dec 08 09:09:37 np0005550138.novalocal python3[5431]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 08 09:09:37 np0005550138.novalocal python3[5455]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 08 09:09:37 np0005550138.novalocal python3[5479]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 08 09:09:38 np0005550138.novalocal python3[5503]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 08 09:09:38 np0005550138.novalocal python3[5527]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 08 09:09:38 np0005550138.novalocal python3[5551]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 08 09:09:38 np0005550138.novalocal python3[5575]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 08 09:09:39 np0005550138.novalocal python3[5599]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 08 09:09:39 np0005550138.novalocal python3[5623]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 08 09:09:39 np0005550138.novalocal python3[5647]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 08 09:09:39 np0005550138.novalocal python3[5671]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 08 09:09:40 np0005550138.novalocal python3[5695]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 08 09:09:40 np0005550138.novalocal python3[5719]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 08 09:09:40 np0005550138.novalocal python3[5743]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 08 09:09:41 np0005550138.novalocal python3[5767]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 08 09:09:41 np0005550138.novalocal python3[5791]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 08 09:09:41 np0005550138.novalocal python3[5815]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 08 09:09:41 np0005550138.novalocal python3[5839]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 08 09:09:42 np0005550138.novalocal python3[5863]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 08 09:09:42 np0005550138.novalocal python3[5887]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 08 09:09:42 np0005550138.novalocal python3[5911]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 08 09:09:43 np0005550138.novalocal python3[5935]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 08 09:09:43 np0005550138.novalocal python3[5959]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 08 09:09:43 np0005550138.novalocal python3[5983]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 08 09:09:43 np0005550138.novalocal python3[6007]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 08 09:09:44 np0005550138.novalocal python3[6031]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 08 09:09:46 np0005550138.novalocal sudo[6055]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hudewxxhujmnlbsdoxsuvyvkumtfpvvd ; /usr/bin/python3'
Dec 08 09:09:46 np0005550138.novalocal sudo[6055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:09:46 np0005550138.novalocal python3[6057]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec 08 09:09:46 np0005550138.novalocal systemd[1]: Starting Time & Date Service...
Dec 08 09:09:47 np0005550138.novalocal systemd[1]: Started Time & Date Service.
Dec 08 09:09:47 np0005550138.novalocal systemd-timedated[6059]: Changed time zone to 'UTC' (UTC).
Dec 08 09:09:47 np0005550138.novalocal sudo[6055]: pam_unix(sudo:session): session closed for user root
Dec 08 09:09:47 np0005550138.novalocal sudo[6086]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dturyejgiyolsefltymisqszqhqsudty ; /usr/bin/python3'
Dec 08 09:09:47 np0005550138.novalocal sudo[6086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:09:47 np0005550138.novalocal python3[6088]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:09:47 np0005550138.novalocal sudo[6086]: pam_unix(sudo:session): session closed for user root
Dec 08 09:09:47 np0005550138.novalocal python3[6164]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 08 09:09:48 np0005550138.novalocal python3[6235]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1765184987.670254-253-256996414329963/source _original_basename=tmphgnjtpkx follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:09:48 np0005550138.novalocal python3[6335]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 08 09:09:49 np0005550138.novalocal python3[6406]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1765184988.5391579-302-176704796814073/source _original_basename=tmp2b9hk0fh follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:09:50 np0005550138.novalocal sudo[6506]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kizvdtageonyfnjfggldqlchgmbsjuxa ; /usr/bin/python3'
Dec 08 09:09:50 np0005550138.novalocal sudo[6506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:09:50 np0005550138.novalocal python3[6508]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 08 09:09:50 np0005550138.novalocal sudo[6506]: pam_unix(sudo:session): session closed for user root
Dec 08 09:09:50 np0005550138.novalocal sudo[6579]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysxxnvgslfgsduhqnibdotyswkwqcfji ; /usr/bin/python3'
Dec 08 09:09:50 np0005550138.novalocal sudo[6579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:09:50 np0005550138.novalocal python3[6581]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1765184990.018354-382-212308879645951/source _original_basename=tmpumf81zfc follow=False checksum=342f501e01c1098669fc1f1874ec75e7ad7dd27a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:09:50 np0005550138.novalocal sudo[6579]: pam_unix(sudo:session): session closed for user root
Dec 08 09:09:51 np0005550138.novalocal python3[6629]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 08 09:09:51 np0005550138.novalocal python3[6655]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 08 09:09:51 np0005550138.novalocal sudo[6733]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdgkrflqjlfmbpcmsjosohfdowbvdsak ; /usr/bin/python3'
Dec 08 09:09:51 np0005550138.novalocal sudo[6733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:09:51 np0005550138.novalocal python3[6735]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 08 09:09:51 np0005550138.novalocal sudo[6733]: pam_unix(sudo:session): session closed for user root
Dec 08 09:09:52 np0005550138.novalocal sudo[6806]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvanhbjkrhynbecettkwagzhrbplhkcc ; /usr/bin/python3'
Dec 08 09:09:52 np0005550138.novalocal sudo[6806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:09:52 np0005550138.novalocal python3[6808]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1765184991.6754782-453-18447441269697/source _original_basename=tmph0p6k4w1 follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:09:52 np0005550138.novalocal sudo[6806]: pam_unix(sudo:session): session closed for user root
Dec 08 09:09:53 np0005550138.novalocal sudo[6857]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etetqfnbkudkakjsjmvyydzyneidpjog ; /usr/bin/python3'
Dec 08 09:09:53 np0005550138.novalocal sudo[6857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:09:53 np0005550138.novalocal python3[6859]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163e3b-3c83-cef2-4a2f-00000000001f-1-compute1 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 08 09:09:53 np0005550138.novalocal sudo[6857]: pam_unix(sudo:session): session closed for user root
Dec 08 09:09:53 np0005550138.novalocal python3[6887]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env _uses_shell=True zuul_log_id=fa163e3b-3c83-cef2-4a2f-000000000020-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Dec 08 09:09:55 np0005550138.novalocal python3[6915]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:10:13 np0005550138.novalocal sudo[6939]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stgzhexkxoqcafyhozoikwkgkvvbwegj ; /usr/bin/python3'
Dec 08 09:10:13 np0005550138.novalocal sudo[6939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:10:13 np0005550138.novalocal python3[6941]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:10:13 np0005550138.novalocal sudo[6939]: pam_unix(sudo:session): session closed for user root
Dec 08 09:10:17 np0005550138.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 08 09:11:13 np0005550138.novalocal sshd-session[4309]: Received disconnect from 38.102.83.114 port 52336:11: disconnected by user
Dec 08 09:11:13 np0005550138.novalocal sshd-session[4309]: Disconnected from user zuul 38.102.83.114 port 52336
Dec 08 09:11:13 np0005550138.novalocal sshd-session[4296]: pam_unix(sshd:session): session closed for user zuul
Dec 08 09:11:13 np0005550138.novalocal systemd-logind[795]: Session 1 logged out. Waiting for processes to exit.
Dec 08 09:11:22 np0005550138.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec 08 09:11:22 np0005550138.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Dec 08 09:11:22 np0005550138.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Dec 08 09:11:22 np0005550138.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Dec 08 09:11:22 np0005550138.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Dec 08 09:11:22 np0005550138.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Dec 08 09:11:22 np0005550138.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Dec 08 09:11:22 np0005550138.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Dec 08 09:11:22 np0005550138.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Dec 08 09:11:22 np0005550138.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Dec 08 09:11:22 np0005550138.novalocal NetworkManager[857]: <info>  [1765185082.0819] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec 08 09:11:22 np0005550138.novalocal systemd-udevd[6945]: Network interface NamePolicy= disabled on kernel command line.
Dec 08 09:11:22 np0005550138.novalocal NetworkManager[857]: <info>  [1765185082.1081] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 08 09:11:22 np0005550138.novalocal NetworkManager[857]: <info>  [1765185082.1115] settings: (eth1): created default wired connection 'Wired connection 1'
Dec 08 09:11:22 np0005550138.novalocal NetworkManager[857]: <info>  [1765185082.1120] device (eth1): carrier: link connected
Dec 08 09:11:22 np0005550138.novalocal NetworkManager[857]: <info>  [1765185082.1122] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec 08 09:11:22 np0005550138.novalocal NetworkManager[857]: <info>  [1765185082.1128] policy: auto-activating connection 'Wired connection 1' (703c6305-83ed-3018-8231-3481d3d9a233)
Dec 08 09:11:22 np0005550138.novalocal NetworkManager[857]: <info>  [1765185082.1132] device (eth1): Activation: starting connection 'Wired connection 1' (703c6305-83ed-3018-8231-3481d3d9a233)
Dec 08 09:11:22 np0005550138.novalocal NetworkManager[857]: <info>  [1765185082.1132] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 08 09:11:22 np0005550138.novalocal NetworkManager[857]: <info>  [1765185082.1134] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 08 09:11:22 np0005550138.novalocal NetworkManager[857]: <info>  [1765185082.1138] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 08 09:11:22 np0005550138.novalocal NetworkManager[857]: <info>  [1765185082.1143] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec 08 09:11:22 np0005550138.novalocal systemd[4300]: Starting Mark boot as successful...
Dec 08 09:11:22 np0005550138.novalocal systemd[4300]: Finished Mark boot as successful.
Dec 08 09:11:22 np0005550138.novalocal sshd-session[6949]: Accepted publickey for zuul from 38.102.83.114 port 43646 ssh2: RSA SHA256:9ILqrOWgZKsXCAP6ek0P69EdElsz9g1+oVZcuSDpYrI
Dec 08 09:11:22 np0005550138.novalocal systemd-logind[795]: New session 3 of user zuul.
Dec 08 09:11:22 np0005550138.novalocal systemd[1]: Started Session 3 of User zuul.
Dec 08 09:11:22 np0005550138.novalocal sshd-session[6949]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 08 09:11:22 np0005550138.novalocal python3[6976]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163e3b-3c83-4210-dbc9-00000000018f-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 08 09:11:32 np0005550138.novalocal sudo[7054]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykyeefqjckmytbphyxhpgumcqzcmijfs ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 08 09:11:32 np0005550138.novalocal sudo[7054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:11:33 np0005550138.novalocal python3[7056]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 08 09:11:33 np0005550138.novalocal sudo[7054]: pam_unix(sudo:session): session closed for user root
Dec 08 09:11:33 np0005550138.novalocal sudo[7127]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmqnaevnnieadljtktvvxdkxmnfxdffm ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 08 09:11:33 np0005550138.novalocal sudo[7127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:11:33 np0005550138.novalocal python3[7129]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765185092.7590656-155-91260812204427/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=ff814c49632bf6298510fb614528a14e16fb6e4e backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:11:33 np0005550138.novalocal sudo[7127]: pam_unix(sudo:session): session closed for user root
Dec 08 09:11:33 np0005550138.novalocal sudo[7177]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aywqkkanzeyywyykzyxbsyhwlvsjhyim ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 08 09:11:33 np0005550138.novalocal sudo[7177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:11:33 np0005550138.novalocal python3[7179]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 08 09:11:33 np0005550138.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec 08 09:11:33 np0005550138.novalocal systemd[1]: Stopped Network Manager Wait Online.
Dec 08 09:11:33 np0005550138.novalocal systemd[1]: Stopping Network Manager Wait Online...
Dec 08 09:11:33 np0005550138.novalocal systemd[1]: Stopping Network Manager...
Dec 08 09:11:33 np0005550138.novalocal NetworkManager[857]: <info>  [1765185093.9421] caught SIGTERM, shutting down normally.
Dec 08 09:11:33 np0005550138.novalocal NetworkManager[857]: <info>  [1765185093.9432] dhcp4 (eth0): canceled DHCP transaction
Dec 08 09:11:33 np0005550138.novalocal NetworkManager[857]: <info>  [1765185093.9433] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 08 09:11:33 np0005550138.novalocal NetworkManager[857]: <info>  [1765185093.9433] dhcp4 (eth0): state changed no lease
Dec 08 09:11:33 np0005550138.novalocal NetworkManager[857]: <info>  [1765185093.9434] manager: NetworkManager state is now CONNECTING
Dec 08 09:11:33 np0005550138.novalocal NetworkManager[857]: <info>  [1765185093.9529] dhcp4 (eth1): canceled DHCP transaction
Dec 08 09:11:33 np0005550138.novalocal NetworkManager[857]: <info>  [1765185093.9530] dhcp4 (eth1): state changed no lease
Dec 08 09:11:33 np0005550138.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 08 09:11:33 np0005550138.novalocal NetworkManager[857]: <info>  [1765185093.9598] exiting (success)
Dec 08 09:11:33 np0005550138.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 08 09:11:33 np0005550138.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Dec 08 09:11:33 np0005550138.novalocal systemd[1]: Stopped Network Manager.
Dec 08 09:11:33 np0005550138.novalocal systemd[1]: NetworkManager.service: Consumed 1.250s CPU time, 10.0M memory peak.
Dec 08 09:11:33 np0005550138.novalocal systemd[1]: Starting Network Manager...
Dec 08 09:11:34 np0005550138.novalocal NetworkManager[7192]: <info>  [1765185094.0172] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:f484d2fb-c260-4eb2-82b3-d1ac68e69214)
Dec 08 09:11:34 np0005550138.novalocal NetworkManager[7192]: <info>  [1765185094.0173] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec 08 09:11:34 np0005550138.novalocal NetworkManager[7192]: <info>  [1765185094.0224] manager[0x5578015c0070]: monitoring kernel firmware directory '/lib/firmware'.
Dec 08 09:11:34 np0005550138.novalocal systemd[1]: Starting Hostname Service...
Dec 08 09:11:34 np0005550138.novalocal systemd[1]: Started Hostname Service.
Dec 08 09:11:34 np0005550138.novalocal NetworkManager[7192]: <info>  [1765185094.1074] hostname: hostname: using hostnamed
Dec 08 09:11:34 np0005550138.novalocal NetworkManager[7192]: <info>  [1765185094.1075] hostname: static hostname changed from (none) to "np0005550138.novalocal"
Dec 08 09:11:34 np0005550138.novalocal NetworkManager[7192]: <info>  [1765185094.1083] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec 08 09:11:34 np0005550138.novalocal NetworkManager[7192]: <info>  [1765185094.1089] manager[0x5578015c0070]: rfkill: Wi-Fi hardware radio set enabled
Dec 08 09:11:34 np0005550138.novalocal NetworkManager[7192]: <info>  [1765185094.1090] manager[0x5578015c0070]: rfkill: WWAN hardware radio set enabled
Dec 08 09:11:34 np0005550138.novalocal NetworkManager[7192]: <info>  [1765185094.1121] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Dec 08 09:11:34 np0005550138.novalocal NetworkManager[7192]: <info>  [1765185094.1122] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 08 09:11:34 np0005550138.novalocal NetworkManager[7192]: <info>  [1765185094.1122] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 08 09:11:34 np0005550138.novalocal NetworkManager[7192]: <info>  [1765185094.1123] manager: Networking is enabled by state file
Dec 08 09:11:34 np0005550138.novalocal NetworkManager[7192]: <info>  [1765185094.1125] settings: Loaded settings plugin: keyfile (internal)
Dec 08 09:11:34 np0005550138.novalocal NetworkManager[7192]: <info>  [1765185094.1131] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec 08 09:11:34 np0005550138.novalocal NetworkManager[7192]: <info>  [1765185094.1165] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec 08 09:11:34 np0005550138.novalocal NetworkManager[7192]: <info>  [1765185094.1177] dhcp: init: Using DHCP client 'internal'
Dec 08 09:11:34 np0005550138.novalocal NetworkManager[7192]: <info>  [1765185094.1181] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec 08 09:11:34 np0005550138.novalocal NetworkManager[7192]: <info>  [1765185094.1188] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 08 09:11:34 np0005550138.novalocal NetworkManager[7192]: <info>  [1765185094.1196] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec 08 09:11:34 np0005550138.novalocal NetworkManager[7192]: <info>  [1765185094.1207] device (lo): Activation: starting connection 'lo' (ce717743-79fb-4648-bac2-02cc511629a9)
Dec 08 09:11:34 np0005550138.novalocal NetworkManager[7192]: <info>  [1765185094.1216] device (eth0): carrier: link connected
Dec 08 09:11:34 np0005550138.novalocal NetworkManager[7192]: <info>  [1765185094.1222] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec 08 09:11:34 np0005550138.novalocal NetworkManager[7192]: <info>  [1765185094.1229] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec 08 09:11:34 np0005550138.novalocal NetworkManager[7192]: <info>  [1765185094.1230] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 08 09:11:34 np0005550138.novalocal NetworkManager[7192]: <info>  [1765185094.1239] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 08 09:11:34 np0005550138.novalocal NetworkManager[7192]: <info>  [1765185094.1248] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 08 09:11:34 np0005550138.novalocal NetworkManager[7192]: <info>  [1765185094.1256] device (eth1): carrier: link connected
Dec 08 09:11:34 np0005550138.novalocal NetworkManager[7192]: <info>  [1765185094.1262] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec 08 09:11:34 np0005550138.novalocal NetworkManager[7192]: <info>  [1765185094.1268] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (703c6305-83ed-3018-8231-3481d3d9a233) (indicated)
Dec 08 09:11:34 np0005550138.novalocal NetworkManager[7192]: <info>  [1765185094.1269] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 08 09:11:34 np0005550138.novalocal NetworkManager[7192]: <info>  [1765185094.1275] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 08 09:11:34 np0005550138.novalocal NetworkManager[7192]: <info>  [1765185094.1283] device (eth1): Activation: starting connection 'Wired connection 1' (703c6305-83ed-3018-8231-3481d3d9a233)
Dec 08 09:11:34 np0005550138.novalocal systemd[1]: Started Network Manager.
Dec 08 09:11:34 np0005550138.novalocal NetworkManager[7192]: <info>  [1765185094.1292] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 08 09:11:34 np0005550138.novalocal NetworkManager[7192]: <info>  [1765185094.1297] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec 08 09:11:34 np0005550138.novalocal NetworkManager[7192]: <info>  [1765185094.1300] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec 08 09:11:34 np0005550138.novalocal NetworkManager[7192]: <info>  [1765185094.1302] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec 08 09:11:34 np0005550138.novalocal NetworkManager[7192]: <info>  [1765185094.1304] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 08 09:11:34 np0005550138.novalocal NetworkManager[7192]: <info>  [1765185094.1307] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 08 09:11:34 np0005550138.novalocal NetworkManager[7192]: <info>  [1765185094.1312] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 08 09:11:34 np0005550138.novalocal NetworkManager[7192]: <info>  [1765185094.1314] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 08 09:11:34 np0005550138.novalocal NetworkManager[7192]: <info>  [1765185094.1318] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec 08 09:11:34 np0005550138.novalocal NetworkManager[7192]: <info>  [1765185094.1325] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 08 09:11:34 np0005550138.novalocal NetworkManager[7192]: <info>  [1765185094.1339] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 08 09:11:34 np0005550138.novalocal NetworkManager[7192]: <info>  [1765185094.1348] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 08 09:11:34 np0005550138.novalocal NetworkManager[7192]: <info>  [1765185094.1351] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec 08 09:11:34 np0005550138.novalocal NetworkManager[7192]: <info>  [1765185094.1367] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec 08 09:11:34 np0005550138.novalocal NetworkManager[7192]: <info>  [1765185094.1373] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec 08 09:11:34 np0005550138.novalocal NetworkManager[7192]: <info>  [1765185094.1378] device (lo): Activation: successful, device activated.
Dec 08 09:11:34 np0005550138.novalocal NetworkManager[7192]: <info>  [1765185094.1407] dhcp4 (eth0): state changed new lease, address=38.102.83.181
Dec 08 09:11:34 np0005550138.novalocal NetworkManager[7192]: <info>  [1765185094.1413] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec 08 09:11:34 np0005550138.novalocal systemd[1]: Starting Network Manager Wait Online...
Dec 08 09:11:34 np0005550138.novalocal NetworkManager[7192]: <info>  [1765185094.1463] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 08 09:11:34 np0005550138.novalocal NetworkManager[7192]: <info>  [1765185094.1499] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 08 09:11:34 np0005550138.novalocal NetworkManager[7192]: <info>  [1765185094.1502] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 08 09:11:34 np0005550138.novalocal NetworkManager[7192]: <info>  [1765185094.1506] manager: NetworkManager state is now CONNECTED_SITE
Dec 08 09:11:34 np0005550138.novalocal NetworkManager[7192]: <info>  [1765185094.1510] device (eth0): Activation: successful, device activated.
Dec 08 09:11:34 np0005550138.novalocal NetworkManager[7192]: <info>  [1765185094.1514] manager: NetworkManager state is now CONNECTED_GLOBAL
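At this point eth0 holds a DHCP lease (38.102.83.181) and NetworkManager has made 'System eth0' the default for IPv4 routing and DNS; the task logged just below runs 'ip route' to record that. A minimal sketch of checks that would confirm the same state on a NetworkManager-managed host (standard iproute2/nmcli commands, nothing job-specific):

    ip route show default                                        # default route should leave via eth0
    nmcli -g IP4.ADDRESS,IP4.GATEWAY,IP4.DNS device show eth0    # lease details as NetworkManager sees them
    cat /etc/resolv.conf                                         # DNS servers written by NetworkManager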
Dec 08 09:11:34 np0005550138.novalocal sudo[7177]: pam_unix(sudo:session): session closed for user root
Dec 08 09:11:34 np0005550138.novalocal python3[7263]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163e3b-3c83-4210-dbc9-0000000000c8-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 08 09:11:44 np0005550138.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 08 09:12:04 np0005550138.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 08 09:12:19 np0005550138.novalocal NetworkManager[7192]: <info>  [1765185139.5537] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 08 09:12:19 np0005550138.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 08 09:12:19 np0005550138.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 08 09:12:19 np0005550138.novalocal NetworkManager[7192]: <info>  [1765185139.5835] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 08 09:12:19 np0005550138.novalocal NetworkManager[7192]: <info>  [1765185139.5838] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 08 09:12:19 np0005550138.novalocal NetworkManager[7192]: <info>  [1765185139.5848] device (eth1): Activation: successful, device activated.
Dec 08 09:12:19 np0005550138.novalocal NetworkManager[7192]: <info>  [1765185139.5855] manager: startup complete
Dec 08 09:12:19 np0005550138.novalocal NetworkManager[7192]: <info>  [1765185139.5857] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Dec 08 09:12:19 np0005550138.novalocal NetworkManager[7192]: <warn>  [1765185139.5864] device (eth1): Activation: failed for connection 'Wired connection 1'
Dec 08 09:12:19 np0005550138.novalocal NetworkManager[7192]: <info>  [1765185139.5872] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Dec 08 09:12:19 np0005550138.novalocal systemd[1]: Finished Network Manager Wait Online.
Dec 08 09:12:19 np0005550138.novalocal NetworkManager[7192]: <info>  [1765185139.5970] dhcp4 (eth1): canceled DHCP transaction
Dec 08 09:12:19 np0005550138.novalocal NetworkManager[7192]: <info>  [1765185139.5972] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec 08 09:12:19 np0005550138.novalocal NetworkManager[7192]: <info>  [1765185139.5973] dhcp4 (eth1): state changed no lease
Dec 08 09:12:19 np0005550138.novalocal NetworkManager[7192]: <info>  [1765185139.5994] policy: auto-activating connection 'ci-private-network' (ee83eef6-1e86-5eac-a45d-fd4a9023a665)
Dec 08 09:12:19 np0005550138.novalocal NetworkManager[7192]: <info>  [1765185139.6000] device (eth1): Activation: starting connection 'ci-private-network' (ee83eef6-1e86-5eac-a45d-fd4a9023a665)
Dec 08 09:12:19 np0005550138.novalocal NetworkManager[7192]: <info>  [1765185139.6002] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 08 09:12:19 np0005550138.novalocal NetworkManager[7192]: <info>  [1765185139.6007] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 08 09:12:19 np0005550138.novalocal NetworkManager[7192]: <info>  [1765185139.6017] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 08 09:12:19 np0005550138.novalocal NetworkManager[7192]: <info>  [1765185139.6029] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 08 09:12:19 np0005550138.novalocal NetworkManager[7192]: <info>  [1765185139.6070] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 08 09:12:19 np0005550138.novalocal NetworkManager[7192]: <info>  [1765185139.6074] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 08 09:12:19 np0005550138.novalocal NetworkManager[7192]: <info>  [1765185139.6085] device (eth1): Activation: successful, device activated.
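The assumed 'Wired connection 1' profile on eth1 fails above with reason 'ip-config-unavailable' (its DHCP transaction never produced a lease), and NetworkManager immediately auto-activates the 'ci-private-network' profile instead, which reaches the activated state within milliseconds and therefore presumably carries static addressing. A sketch, using only standard nmcli/journalctl invocations, of how those two profiles and the failure could be inspected afterwards:

    nmcli -f NAME,UUID,TYPE,DEVICE connection show                                  # both eth1 profiles should be listed
    nmcli connection show ci-private-network | grep -E 'ipv4\.(method|addresses)'   # auto vs. manual addressing
    journalctl -u NetworkManager --grep eth1                                        # replay the state changes above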
Dec 08 09:12:29 np0005550138.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 08 09:12:34 np0005550138.novalocal sshd-session[6952]: Received disconnect from 38.102.83.114 port 43646:11: disconnected by user
Dec 08 09:12:34 np0005550138.novalocal sshd-session[6952]: Disconnected from user zuul 38.102.83.114 port 43646
Dec 08 09:12:34 np0005550138.novalocal sshd-session[6949]: pam_unix(sshd:session): session closed for user zuul
Dec 08 09:12:34 np0005550138.novalocal systemd-logind[795]: Session 3 logged out. Waiting for processes to exit.
Dec 08 09:12:34 np0005550138.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Dec 08 09:12:34 np0005550138.novalocal systemd[1]: session-3.scope: Consumed 1.528s CPU time.
Dec 08 09:12:34 np0005550138.novalocal systemd-logind[795]: Removed session 3.
Dec 08 09:13:07 np0005550138.novalocal sshd-session[7291]: Accepted publickey for zuul from 38.102.83.114 port 36474 ssh2: RSA SHA256:9ILqrOWgZKsXCAP6ek0P69EdElsz9g1+oVZcuSDpYrI
Dec 08 09:13:07 np0005550138.novalocal systemd-logind[795]: New session 4 of user zuul.
Dec 08 09:13:07 np0005550138.novalocal systemd[1]: Started Session 4 of User zuul.
Dec 08 09:13:07 np0005550138.novalocal sshd-session[7291]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 08 09:13:07 np0005550138.novalocal sudo[7370]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhyyivwstugswarxgitctftstchrfciq ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 08 09:13:07 np0005550138.novalocal sudo[7370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:13:08 np0005550138.novalocal python3[7372]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 08 09:13:08 np0005550138.novalocal sudo[7370]: pam_unix(sudo:session): session closed for user root
Dec 08 09:13:08 np0005550138.novalocal sudo[7443]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bomksmxrdusvokqzjkfbpzzjbvnvsxrs ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 08 09:13:08 np0005550138.novalocal sudo[7443]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:13:08 np0005550138.novalocal python3[7445]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765185187.7788935-373-43218510246421/source _original_basename=tmpf92onmvc follow=False checksum=8271a5ec204270f8e7a020ec8c20e039e9f6795b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:13:08 np0005550138.novalocal sudo[7443]: pam_unix(sudo:session): session closed for user root
Dec 08 09:13:11 np0005550138.novalocal sshd-session[7294]: Connection closed by 38.102.83.114 port 36474
Dec 08 09:13:11 np0005550138.novalocal sshd-session[7291]: pam_unix(sshd:session): session closed for user zuul
Dec 08 09:13:11 np0005550138.novalocal systemd-logind[795]: Session 4 logged out. Waiting for processes to exit.
Dec 08 09:13:11 np0005550138.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Dec 08 09:13:11 np0005550138.novalocal systemd-logind[795]: Removed session 4.
Dec 08 09:14:25 np0005550138.novalocal systemd[4300]: Created slice User Background Tasks Slice.
Dec 08 09:14:25 np0005550138.novalocal systemd[4300]: Starting Cleanup of User's Temporary Files and Directories...
Dec 08 09:14:25 np0005550138.novalocal systemd[4300]: Finished Cleanup of User's Temporary Files and Directories.
Dec 08 09:15:26 np0005550138.novalocal sshd-session[7472]: Connection closed by 101.36.108.248 port 52788
Dec 08 09:15:40 np0005550138.novalocal sshd-session[7473]: Connection closed by 101.36.108.248 port 53098 [preauth]
Dec 08 09:18:33 np0005550138.novalocal sshd-session[7476]: Accepted publickey for zuul from 38.102.83.114 port 58392 ssh2: RSA SHA256:9ILqrOWgZKsXCAP6ek0P69EdElsz9g1+oVZcuSDpYrI
Dec 08 09:18:33 np0005550138.novalocal systemd-logind[795]: New session 5 of user zuul.
Dec 08 09:18:33 np0005550138.novalocal systemd[1]: Started Session 5 of User zuul.
Dec 08 09:18:33 np0005550138.novalocal sshd-session[7476]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 08 09:18:33 np0005550138.novalocal sudo[7503]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wydehabeaicqclbbodnkqsljhhywuivu ; /usr/bin/python3'
Dec 08 09:18:33 np0005550138.novalocal sudo[7503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:18:33 np0005550138.novalocal python3[7505]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda
                                                       _uses_shell=True zuul_log_id=fa163e3b-3c83-7257-7d30-000000001cd4-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 08 09:18:33 np0005550138.novalocal sudo[7503]: pam_unix(sudo:session): session closed for user root
Dec 08 09:18:33 np0005550138.novalocal sudo[7531]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpallfrbyxcybgrmiocazistamawlduf ; /usr/bin/python3'
Dec 08 09:18:33 np0005550138.novalocal sudo[7531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:18:33 np0005550138.novalocal python3[7533]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:18:33 np0005550138.novalocal sudo[7531]: pam_unix(sudo:session): session closed for user root
Dec 08 09:18:33 np0005550138.novalocal sudo[7558]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrclnwsqpllstyxllzjoxhftrfpqbqkk ; /usr/bin/python3'
Dec 08 09:18:33 np0005550138.novalocal sudo[7558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:18:34 np0005550138.novalocal python3[7560]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:18:34 np0005550138.novalocal sudo[7558]: pam_unix(sudo:session): session closed for user root
Dec 08 09:18:34 np0005550138.novalocal sudo[7584]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrdfkibohokunedwzaslogpvluwvmytu ; /usr/bin/python3'
Dec 08 09:18:34 np0005550138.novalocal sudo[7584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:18:34 np0005550138.novalocal python3[7586]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:18:34 np0005550138.novalocal sudo[7584]: pam_unix(sudo:session): session closed for user root
Dec 08 09:18:34 np0005550138.novalocal sudo[7610]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kboeulbhxdtwxhfttzzgguwuderepnxd ; /usr/bin/python3'
Dec 08 09:18:34 np0005550138.novalocal sudo[7610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:18:34 np0005550138.novalocal python3[7612]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:18:34 np0005550138.novalocal sudo[7610]: pam_unix(sudo:session): session closed for user root
Dec 08 09:18:35 np0005550138.novalocal sudo[7636]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpjpzdocvzaeiquwkvqrmrblpcfwxyoi ; /usr/bin/python3'
Dec 08 09:18:35 np0005550138.novalocal sudo[7636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:18:35 np0005550138.novalocal python3[7638]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:18:35 np0005550138.novalocal sudo[7636]: pam_unix(sudo:session): session closed for user root
Dec 08 09:18:35 np0005550138.novalocal sudo[7714]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mllplzzjxrjimnjveasoyuwvxsmyekvq ; /usr/bin/python3'
Dec 08 09:18:35 np0005550138.novalocal sudo[7714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:18:35 np0005550138.novalocal python3[7716]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 08 09:18:35 np0005550138.novalocal sudo[7714]: pam_unix(sudo:session): session closed for user root
Dec 08 09:18:36 np0005550138.novalocal sudo[7787]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alwnpwdcldvpdpgqnidgusybganxpuvh ; /usr/bin/python3'
Dec 08 09:18:36 np0005550138.novalocal sudo[7787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:18:36 np0005550138.novalocal python3[7789]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765185515.6863396-517-29263219504705/source _original_basename=tmptgbw0q_u follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:18:36 np0005550138.novalocal sudo[7787]: pam_unix(sudo:session): session closed for user root
Dec 08 09:18:37 np0005550138.novalocal sudo[7837]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggnmyxpggipsumupwwrhesanzpcmvvxc ; /usr/bin/python3'
Dec 08 09:18:37 np0005550138.novalocal sudo[7837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:18:37 np0005550138.novalocal python3[7839]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 08 09:18:37 np0005550138.novalocal systemd[1]: Reloading.
Dec 08 09:18:37 np0005550138.novalocal systemd-rc-local-generator[7861]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 08 09:18:37 np0005550138.novalocal sudo[7837]: pam_unix(sudo:session): session closed for user root
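The play installs /etc/systemd/system.conf.d/override.conf and immediately asks systemd for a daemon-reload (the 'Reloading.' line above); the file's contents are not logged (content=NOT_LOGGING_PARAMETER). Purely as an illustration of the mechanism, and given that the next tasks wait for io.max files to appear under the slice cgroups, a drop-in of roughly this shape would be one plausible candidate, followed by the reload the module performs:

    # /etc/systemd/system.conf.d/override.conf -- hypothetical contents, the real file is not logged
    [Manager]
    DefaultIOAccounting=yes

    systemctl daemon-reload    # equivalent of ansible.builtin.systemd_service with daemon_reload=True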
Dec 08 09:18:38 np0005550138.novalocal sudo[7893]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvvlwhhbpstpxwwlljvaruphlpespcek ; /usr/bin/python3'
Dec 08 09:18:38 np0005550138.novalocal sudo[7893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:18:39 np0005550138.novalocal python3[7895]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Dec 08 09:18:39 np0005550138.novalocal sudo[7893]: pam_unix(sudo:session): session closed for user root
Dec 08 09:18:39 np0005550138.novalocal sudo[7919]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sngnjnsubadszlzhnmmzioxsojhjyzug ; /usr/bin/python3'
Dec 08 09:18:39 np0005550138.novalocal sudo[7919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:18:39 np0005550138.novalocal python3[7921]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 08 09:18:39 np0005550138.novalocal sudo[7919]: pam_unix(sudo:session): session closed for user root
Dec 08 09:18:39 np0005550138.novalocal sudo[7947]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kojhtcassjbmmkhrgcmyoglvzzxbjodt ; /usr/bin/python3'
Dec 08 09:18:39 np0005550138.novalocal sudo[7947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:18:39 np0005550138.novalocal python3[7949]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 08 09:18:39 np0005550138.novalocal sudo[7947]: pam_unix(sudo:session): session closed for user root
Dec 08 09:18:39 np0005550138.novalocal sudo[7975]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfjtfvfcdnjqvgmovhdnckswjdfduwlg ; /usr/bin/python3'
Dec 08 09:18:39 np0005550138.novalocal sudo[7975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:18:40 np0005550138.novalocal python3[7977]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 08 09:18:40 np0005550138.novalocal sudo[7975]: pam_unix(sudo:session): session closed for user root
Dec 08 09:18:40 np0005550138.novalocal sudo[8003]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymqcycdxhxnwpftqvuvdpuyumbrqkpxd ; /usr/bin/python3'
Dec 08 09:18:40 np0005550138.novalocal sudo[8003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:18:40 np0005550138.novalocal python3[8005]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 08 09:18:40 np0005550138.novalocal sudo[8003]: pam_unix(sudo:session): session closed for user root
Dec 08 09:18:40 np0005550138.novalocal python3[8032]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;
                                                       _uses_shell=True zuul_log_id=fa163e3b-3c83-7257-7d30-000000001cdb-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
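The loop above writes cgroup v2 I/O limits for device 252:0 (the MAJ:MIN that the earlier lsblk call reported for /dev/vda) into io.max of the init, machine, system, and user slices, then reads each file back. The same sequence, condensed into a sketch that assumes /dev/vda and the limits taken verbatim from the log:

    MAJMIN=$(lsblk -nd -o MAJ:MIN /dev/vda | tr -d ' ')    # e.g. 252:0
    for slice in init.scope machine.slice system.slice user.slice; do
        echo "$MAJMIN riops=18000 wiops=18000 rbps=262144000 wbps=262144000" \
            > /sys/fs/cgroup/$slice/io.max
        cat /sys/fs/cgroup/$slice/io.max                   # verify the limit was accepted
    done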
Dec 08 09:18:41 np0005550138.novalocal python3[8062]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 08 09:18:44 np0005550138.novalocal sshd-session[7479]: Connection closed by 38.102.83.114 port 58392
Dec 08 09:18:44 np0005550138.novalocal sshd-session[7476]: pam_unix(sshd:session): session closed for user zuul
Dec 08 09:18:44 np0005550138.novalocal systemd[1]: session-5.scope: Deactivated successfully.
Dec 08 09:18:44 np0005550138.novalocal systemd[1]: session-5.scope: Consumed 4.168s CPU time.
Dec 08 09:18:44 np0005550138.novalocal systemd-logind[795]: Session 5 logged out. Waiting for processes to exit.
Dec 08 09:18:44 np0005550138.novalocal systemd-logind[795]: Removed session 5.
Dec 08 09:18:46 np0005550138.novalocal sshd-session[8066]: Accepted publickey for zuul from 38.102.83.114 port 42864 ssh2: RSA SHA256:9ILqrOWgZKsXCAP6ek0P69EdElsz9g1+oVZcuSDpYrI
Dec 08 09:18:46 np0005550138.novalocal systemd-logind[795]: New session 6 of user zuul.
Dec 08 09:18:46 np0005550138.novalocal systemd[1]: Started Session 6 of User zuul.
Dec 08 09:18:46 np0005550138.novalocal sshd-session[8066]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 08 09:18:46 np0005550138.novalocal sudo[8093]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvmlnbbziqjwvwrwdrinqydzunnhsyfg ; /usr/bin/python3'
Dec 08 09:18:46 np0005550138.novalocal sudo[8093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:18:46 np0005550138.novalocal python3[8095]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 08 09:19:06 np0005550138.novalocal kernel: SELinux:  Converting 385 SID table entries...
Dec 08 09:19:06 np0005550138.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Dec 08 09:19:06 np0005550138.novalocal kernel: SELinux:  policy capability open_perms=1
Dec 08 09:19:06 np0005550138.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Dec 08 09:19:06 np0005550138.novalocal kernel: SELinux:  policy capability always_check_network=0
Dec 08 09:19:06 np0005550138.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 08 09:19:06 np0005550138.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 08 09:19:06 np0005550138.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 08 09:19:18 np0005550138.novalocal kernel: SELinux:  Converting 385 SID table entries...
Dec 08 09:19:18 np0005550138.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Dec 08 09:19:18 np0005550138.novalocal kernel: SELinux:  policy capability open_perms=1
Dec 08 09:19:18 np0005550138.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Dec 08 09:19:18 np0005550138.novalocal kernel: SELinux:  policy capability always_check_network=0
Dec 08 09:19:18 np0005550138.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 08 09:19:18 np0005550138.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 08 09:19:18 np0005550138.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 08 09:19:29 np0005550138.novalocal kernel: SELinux:  Converting 385 SID table entries...
Dec 08 09:19:29 np0005550138.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Dec 08 09:19:29 np0005550138.novalocal kernel: SELinux:  policy capability open_perms=1
Dec 08 09:19:29 np0005550138.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Dec 08 09:19:29 np0005550138.novalocal kernel: SELinux:  policy capability always_check_network=0
Dec 08 09:19:29 np0005550138.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 08 09:19:29 np0005550138.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 08 09:19:29 np0005550138.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 08 09:19:31 np0005550138.novalocal setsebool[8162]: The virt_use_nfs policy boolean was changed to 1 by root
Dec 08 09:19:31 np0005550138.novalocal setsebool[8162]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
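The podman/buildah installation triggers several SELinux policy reloads, and during the transaction the virt_use_nfs and virt_sandbox_use_all_caps booleans are switched on, apparently by package scriptlets. The equivalent manual commands, for reference:

    setsebool -P virt_use_nfs=1 virt_sandbox_use_all_caps=1    # -P makes the change persistent across reboots
    getsebool virt_use_nfs virt_sandbox_use_all_caps           # confirm both booleans report 'on'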
Dec 08 09:19:43 np0005550138.novalocal kernel: SELinux:  Converting 388 SID table entries...
Dec 08 09:19:43 np0005550138.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Dec 08 09:19:43 np0005550138.novalocal kernel: SELinux:  policy capability open_perms=1
Dec 08 09:19:43 np0005550138.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Dec 08 09:19:43 np0005550138.novalocal kernel: SELinux:  policy capability always_check_network=0
Dec 08 09:19:43 np0005550138.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 08 09:19:43 np0005550138.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 08 09:19:43 np0005550138.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 08 09:20:01 np0005550138.novalocal dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec 08 09:20:01 np0005550138.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 08 09:20:01 np0005550138.novalocal systemd[1]: Starting man-db-cache-update.service...
Dec 08 09:20:01 np0005550138.novalocal systemd[1]: Reloading.
Dec 08 09:20:02 np0005550138.novalocal systemd-rc-local-generator[8917]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 08 09:20:02 np0005550138.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Dec 08 09:20:03 np0005550138.novalocal sudo[8093]: pam_unix(sudo:session): session closed for user root
Dec 08 09:20:04 np0005550138.novalocal python3[10813]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"
                                                        _uses_shell=True zuul_log_id=fa163e3b-3c83-b914-8a75-00000000000c-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 08 09:20:05 np0005550138.novalocal kernel: evm: overlay not supported
Dec 08 09:20:05 np0005550138.novalocal systemd[4300]: Starting D-Bus User Message Bus...
Dec 08 09:20:05 np0005550138.novalocal dbus-broker-launch[12038]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Dec 08 09:20:05 np0005550138.novalocal dbus-broker-launch[12038]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Dec 08 09:20:05 np0005550138.novalocal systemd[4300]: Started D-Bus User Message Bus.
Dec 08 09:20:05 np0005550138.novalocal dbus-broker-lau[12038]: Ready
Dec 08 09:20:05 np0005550138.novalocal systemd[4300]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec 08 09:20:05 np0005550138.novalocal systemd[4300]: Created slice Slice /user.
Dec 08 09:20:05 np0005550138.novalocal systemd[4300]: podman-11919.scope: unit configures an IP firewall, but not running as root.
Dec 08 09:20:05 np0005550138.novalocal systemd[4300]: (This warning is only shown for the first unit using IP firewalling.)
Dec 08 09:20:05 np0005550138.novalocal systemd[4300]: Started podman-11919.scope.
Dec 08 09:20:05 np0005550138.novalocal systemd[4300]: Started podman-pause-8dd5d2e0.scope.
Dec 08 09:20:06 np0005550138.novalocal sudo[13092]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryxywvlofqyfqghwcikegucozmokhmgh ; /usr/bin/python3'
Dec 08 09:20:06 np0005550138.novalocal sudo[13092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:20:06 np0005550138.novalocal python3[13111]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]
                                                       location = "38.102.83.113:5001"
                                                       insecure = true path=/etc/containers/registries.conf block=[[registry]]
                                                       location = "38.102.83.113:5001"
                                                       insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:20:06 np0005550138.novalocal python3[13111]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Dec 08 09:20:06 np0005550138.novalocal sudo[13092]: pam_unix(sudo:session): session closed for user root
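The blockinfile task above appends an insecure-registry entry to /etc/containers/registries.conf so podman and buildah can pull from the CI registry at 38.102.83.113:5001 without TLS. Reconstructed from the logged parameters (marker '# {mark} ANSIBLE MANAGED BLOCK'), the managed block reads:

    # BEGIN ANSIBLE MANAGED BLOCK
    [[registry]]
    location = "38.102.83.113:5001"
    insecure = true
    # END ANSIBLE MANAGED BLOCK

No service restart is required: registries.conf is read on each podman invocation, and podman info reports the registries configuration it loaded.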
Dec 08 09:20:06 np0005550138.novalocal sshd-session[8069]: Connection closed by 38.102.83.114 port 42864
Dec 08 09:20:06 np0005550138.novalocal sshd-session[8066]: pam_unix(sshd:session): session closed for user zuul
Dec 08 09:20:06 np0005550138.novalocal systemd-logind[795]: Session 6 logged out. Waiting for processes to exit.
Dec 08 09:20:06 np0005550138.novalocal systemd[1]: session-6.scope: Deactivated successfully.
Dec 08 09:20:06 np0005550138.novalocal systemd[1]: session-6.scope: Consumed 1min 11.210s CPU time.
Dec 08 09:20:06 np0005550138.novalocal systemd-logind[795]: Removed session 6.
Dec 08 09:20:27 np0005550138.novalocal sshd-session[23329]: Connection closed by 38.102.83.192 port 58614 [preauth]
Dec 08 09:20:27 np0005550138.novalocal sshd-session[23326]: Connection closed by 38.102.83.192 port 58618 [preauth]
Dec 08 09:20:27 np0005550138.novalocal sshd-session[23336]: Unable to negotiate with 38.102.83.192 port 58634: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Dec 08 09:20:27 np0005550138.novalocal sshd-session[23333]: Unable to negotiate with 38.102.83.192 port 58638: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Dec 08 09:20:27 np0005550138.novalocal sshd-session[23331]: Unable to negotiate with 38.102.83.192 port 58644: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Dec 08 09:20:32 np0005550138.novalocal sshd-session[25862]: Accepted publickey for zuul from 38.102.83.114 port 49756 ssh2: RSA SHA256:9ILqrOWgZKsXCAP6ek0P69EdElsz9g1+oVZcuSDpYrI
Dec 08 09:20:32 np0005550138.novalocal systemd-logind[795]: New session 7 of user zuul.
Dec 08 09:20:32 np0005550138.novalocal systemd[1]: Started Session 7 of User zuul.
Dec 08 09:20:32 np0005550138.novalocal sshd-session[25862]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 08 09:20:32 np0005550138.novalocal python3[26001]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNodgDEEVbs9D+eDo6354ceaXxTqfvK3Z/cF5DrtyS1CjWtL7DbY6RVV+akTh6jQVHA4k5uRzHYDQ4i2DnhKCz8= zuul@np0005550136.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 08 09:20:32 np0005550138.novalocal sudo[26237]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjswejewcltpotdployrkohhnkryboef ; /usr/bin/python3'
Dec 08 09:20:32 np0005550138.novalocal sudo[26237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:20:33 np0005550138.novalocal python3[26249]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNodgDEEVbs9D+eDo6354ceaXxTqfvK3Z/cF5DrtyS1CjWtL7DbY6RVV+akTh6jQVHA4k5uRzHYDQ4i2DnhKCz8= zuul@np0005550136.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 08 09:20:33 np0005550138.novalocal sudo[26237]: pam_unix(sudo:session): session closed for user root
Dec 08 09:20:33 np0005550138.novalocal sudo[26746]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jldccopxtcaogcvppunfiakktkeeyxuo ; /usr/bin/python3'
Dec 08 09:20:33 np0005550138.novalocal sudo[26746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:20:34 np0005550138.novalocal python3[26754]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005550138.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Dec 08 09:20:34 np0005550138.novalocal useradd[26843]: new group: name=cloud-admin, GID=1002
Dec 08 09:20:34 np0005550138.novalocal useradd[26843]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Dec 08 09:20:34 np0005550138.novalocal sudo[26746]: pam_unix(sudo:session): session closed for user root
Dec 08 09:20:34 np0005550138.novalocal sudo[27025]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzephbjbujgdxcyfgzlvsyanxtuwjlrp ; /usr/bin/python3'
Dec 08 09:20:34 np0005550138.novalocal sudo[27025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:20:34 np0005550138.novalocal python3[27037]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNodgDEEVbs9D+eDo6354ceaXxTqfvK3Z/cF5DrtyS1CjWtL7DbY6RVV+akTh6jQVHA4k5uRzHYDQ4i2DnhKCz8= zuul@np0005550136.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 08 09:20:34 np0005550138.novalocal sudo[27025]: pam_unix(sudo:session): session closed for user root
Dec 08 09:20:34 np0005550138.novalocal sudo[27341]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihwtqppnvmkmfilfyakqouavqbeizjzw ; /usr/bin/python3'
Dec 08 09:20:34 np0005550138.novalocal sudo[27341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:20:34 np0005550138.novalocal python3[27355]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 08 09:20:34 np0005550138.novalocal sudo[27341]: pam_unix(sudo:session): session closed for user root
Dec 08 09:20:35 np0005550138.novalocal sudo[27657]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnfpmbhnnecobikgnjznwtdhtgkkrztq ; /usr/bin/python3'
Dec 08 09:20:35 np0005550138.novalocal sudo[27657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:20:35 np0005550138.novalocal python3[27671]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1765185634.6794038-168-12482705252292/source _original_basename=tmphs4f5jh0 follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:20:35 np0005550138.novalocal sudo[27657]: pam_unix(sudo:session): session closed for user root
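A cloud-admin account (UID/GID 1002) is created, receives the same CI public key, and gets a sudoers drop-in at /etc/sudoers.d/cloud-admin with mode 0640; as with the other copies, the file body itself is not logged. A typical passwordless rule for such an account, shown only as an illustration, together with the standard syntax check:

    # /etc/sudoers.d/cloud-admin -- hypothetical contents, the real file is not logged
    cloud-admin ALL=(ALL) NOPASSWD:ALL

    visudo -cf /etc/sudoers.d/cloud-admin    # validate the drop-in before depending on it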
Dec 08 09:20:36 np0005550138.novalocal sudo[28115]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eseijsjxjdtmjeytyabhchwtlaspkprl ; /usr/bin/python3'
Dec 08 09:20:36 np0005550138.novalocal sudo[28115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:20:36 np0005550138.novalocal python3[28124]: ansible-ansible.builtin.hostname Invoked with name=compute-1 use=systemd
Dec 08 09:20:36 np0005550138.novalocal systemd[1]: Starting Hostname Service...
Dec 08 09:20:36 np0005550138.novalocal systemd[1]: Started Hostname Service.
Dec 08 09:20:36 np0005550138.novalocal systemd-hostnamed[28258]: Changed pretty hostname to 'compute-1'
Dec 08 09:20:36 compute-1 systemd-hostnamed[28258]: Hostname set to <compute-1> (static)
Dec 08 09:20:36 compute-1 NetworkManager[7192]: <info>  [1765185636.5001] hostname: static hostname changed from "np0005550138.novalocal" to "compute-1"
Dec 08 09:20:36 compute-1 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 08 09:20:36 compute-1 systemd[1]: Started Network Manager Script Dispatcher Service.
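The ansible.builtin.hostname task renames the node from np0005550138.novalocal to compute-1 through systemd-hostnamed, which is why the journal switches hostnames mid-stream and NetworkManager records the change. The equivalent manual steps:

    hostnamectl set-hostname compute-1    # sets the static (and pretty) hostname via systemd-hostnamed
    hostnamectl status                    # confirm the new name; NetworkManager logs the change as above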
Dec 08 09:20:36 compute-1 sudo[28115]: pam_unix(sudo:session): session closed for user root
Dec 08 09:20:37 compute-1 sshd-session[25932]: Connection closed by 38.102.83.114 port 49756
Dec 08 09:20:37 compute-1 sshd-session[25862]: pam_unix(sshd:session): session closed for user zuul
Dec 08 09:20:37 compute-1 systemd[1]: session-7.scope: Deactivated successfully.
Dec 08 09:20:37 compute-1 systemd[1]: session-7.scope: Consumed 2.270s CPU time.
Dec 08 09:20:37 compute-1 systemd-logind[795]: Session 7 logged out. Waiting for processes to exit.
Dec 08 09:20:37 compute-1 systemd-logind[795]: Removed session 7.
Dec 08 09:20:40 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 08 09:20:40 compute-1 systemd[1]: Finished man-db-cache-update.service.
Dec 08 09:20:40 compute-1 systemd[1]: man-db-cache-update.service: Consumed 46.918s CPU time.
Dec 08 09:20:40 compute-1 systemd[1]: run-rc9ed6447715d499b80b6404c58a842d7.service: Deactivated successfully.
Dec 08 09:20:46 compute-1 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 08 09:21:06 compute-1 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 08 09:24:15 compute-1 systemd[1]: Starting Cleanup of Temporary Directories...
Dec 08 09:24:15 compute-1 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Dec 08 09:24:15 compute-1 systemd[1]: Finished Cleanup of Temporary Directories.
Dec 08 09:24:15 compute-1 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Dec 08 09:24:16 compute-1 sshd-session[29985]: Accepted publickey for zuul from 38.102.83.192 port 56934 ssh2: RSA SHA256:9ILqrOWgZKsXCAP6ek0P69EdElsz9g1+oVZcuSDpYrI
Dec 08 09:24:16 compute-1 systemd-logind[795]: New session 8 of user zuul.
Dec 08 09:24:16 compute-1 systemd[1]: Started Session 8 of User zuul.
Dec 08 09:24:16 compute-1 sshd-session[29985]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 08 09:24:16 compute-1 python3[30061]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 08 09:24:18 compute-1 sudo[30175]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvlznfaxhdclzhasihftusujuyjqrlwm ; /usr/bin/python3'
Dec 08 09:24:18 compute-1 sudo[30175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:24:18 compute-1 python3[30177]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 08 09:24:18 compute-1 sudo[30175]: pam_unix(sudo:session): session closed for user root
Dec 08 09:24:18 compute-1 sudo[30248]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frttmnajiyiktctutyxoogtmpoygpgei ; /usr/bin/python3'
Dec 08 09:24:18 compute-1 sudo[30248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:24:19 compute-1 python3[30250]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765185858.2625337-33980-247447142145996/source mode=0755 _original_basename=delorean.repo follow=False checksum=0f7c85cc67bf467c48edf98d5acc63e62d808324 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:24:19 compute-1 sudo[30248]: pam_unix(sudo:session): session closed for user root
Dec 08 09:24:19 compute-1 sudo[30274]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynytuheginxrgkiclusysoswudrsmwhc ; /usr/bin/python3'
Dec 08 09:24:19 compute-1 sudo[30274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:24:19 compute-1 python3[30276]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 08 09:24:19 compute-1 sudo[30274]: pam_unix(sudo:session): session closed for user root
Dec 08 09:24:19 compute-1 sudo[30347]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmfvqdraubfobvyscpssmdychffrcfto ; /usr/bin/python3'
Dec 08 09:24:19 compute-1 sudo[30347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:24:19 compute-1 python3[30349]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765185858.2625337-33980-247447142145996/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:24:19 compute-1 sudo[30347]: pam_unix(sudo:session): session closed for user root
Dec 08 09:24:19 compute-1 sudo[30373]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyrbjbsrlqbdvhabnxrkesjvslgxbuln ; /usr/bin/python3'
Dec 08 09:24:19 compute-1 sudo[30373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:24:19 compute-1 python3[30375]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 08 09:24:19 compute-1 sudo[30373]: pam_unix(sudo:session): session closed for user root
Dec 08 09:24:20 compute-1 sudo[30446]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esnnecrwqvcxyyxbsnnlhvccmeappuvq ; /usr/bin/python3'
Dec 08 09:24:20 compute-1 sudo[30446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:24:20 compute-1 python3[30448]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765185858.2625337-33980-247447142145996/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:24:20 compute-1 sudo[30446]: pam_unix(sudo:session): session closed for user root
Dec 08 09:24:20 compute-1 sudo[30472]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxmhnyemcyelaanzolbjgzmoeiuqzuro ; /usr/bin/python3'
Dec 08 09:24:20 compute-1 sudo[30472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:24:20 compute-1 python3[30474]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 08 09:24:20 compute-1 sudo[30472]: pam_unix(sudo:session): session closed for user root
Dec 08 09:24:20 compute-1 sudo[30545]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqavoescldzyirecipkrlmhqpuhbdhey ; /usr/bin/python3'
Dec 08 09:24:20 compute-1 sudo[30545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:24:20 compute-1 python3[30547]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765185858.2625337-33980-247447142145996/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:24:20 compute-1 sudo[30545]: pam_unix(sudo:session): session closed for user root
Dec 08 09:24:21 compute-1 sudo[30571]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmuwugvmqnuiuqjtzhjqhdjpwqlpcpjo ; /usr/bin/python3'
Dec 08 09:24:21 compute-1 sudo[30571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:24:21 compute-1 python3[30573]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 08 09:24:21 compute-1 sudo[30571]: pam_unix(sudo:session): session closed for user root
Dec 08 09:24:21 compute-1 sudo[30644]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vupffqhbjoiwrpehflbsoxpvmdeiursi ; /usr/bin/python3'
Dec 08 09:24:21 compute-1 sudo[30644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:24:21 compute-1 python3[30646]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765185858.2625337-33980-247447142145996/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:24:21 compute-1 sudo[30644]: pam_unix(sudo:session): session closed for user root
Dec 08 09:24:21 compute-1 sudo[30670]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qptgrodbtqnknbmhgszkcbzgjhgljhxb ; /usr/bin/python3'
Dec 08 09:24:21 compute-1 sudo[30670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:24:21 compute-1 python3[30672]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 08 09:24:21 compute-1 sudo[30670]: pam_unix(sudo:session): session closed for user root
Dec 08 09:24:21 compute-1 sudo[30743]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwooshnwgflixukubfzkfzgipmvgxbnq ; /usr/bin/python3'
Dec 08 09:24:21 compute-1 sudo[30743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:24:22 compute-1 python3[30745]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765185858.2625337-33980-247447142145996/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:24:22 compute-1 sudo[30743]: pam_unix(sudo:session): session closed for user root
Dec 08 09:24:22 compute-1 sudo[30769]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrufytuodeprhivrlumenyfzvnbmaxim ; /usr/bin/python3'
Dec 08 09:24:22 compute-1 sudo[30769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:24:22 compute-1 python3[30771]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 08 09:24:22 compute-1 sudo[30769]: pam_unix(sudo:session): session closed for user root
Dec 08 09:24:22 compute-1 sudo[30842]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jonrytlpjeecjwspgiggzymwfheakqmy ; /usr/bin/python3'
Dec 08 09:24:22 compute-1 sudo[30842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:24:22 compute-1 python3[30844]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765185858.2625337-33980-247447142145996/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=2583a70b3ee76a9837350b0837bc004a8e52405c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:24:22 compute-1 sudo[30842]: pam_unix(sudo:session): session closed for user root
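The run of stat/copy pairs above installs the delorean, delorean-antelope-testing, and repo-setup-centos-* repo files (plus delorean.repo.md5) into /etc/yum.repos.d. A quick, generic way to confirm dnf picked them up:

    ls /etc/yum.repos.d/delorean* /etc/yum.repos.d/repo-setup-centos-*.repo
    dnf repolist --enabled    # the delorean and centos repos should now be listed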
Dec 08 09:24:34 compute-1 python3[30892]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 08 09:25:38 compute-1 sshd-session[30895]: Invalid user  from 129.212.180.235 port 33984
Dec 08 09:25:46 compute-1 sshd-session[30895]: Connection closed by invalid user  129.212.180.235 port 33984 [preauth]
Dec 08 09:29:34 compute-1 sshd-session[29988]: Received disconnect from 38.102.83.192 port 56934:11: disconnected by user
Dec 08 09:29:34 compute-1 sshd-session[29988]: Disconnected from user zuul 38.102.83.192 port 56934
Dec 08 09:29:34 compute-1 sshd-session[29985]: pam_unix(sshd:session): session closed for user zuul
Dec 08 09:29:34 compute-1 systemd-logind[795]: Session 8 logged out. Waiting for processes to exit.
Dec 08 09:29:34 compute-1 systemd[1]: session-8.scope: Deactivated successfully.
Dec 08 09:29:34 compute-1 systemd[1]: session-8.scope: Consumed 5.230s CPU time.
Dec 08 09:29:34 compute-1 systemd-logind[795]: Removed session 8.
Dec 08 09:30:42 compute-1 sshd-session[30901]: Received disconnect from 79.32.212.213 port 53774:11: Bye Bye [preauth]
Dec 08 09:30:42 compute-1 sshd-session[30901]: Disconnected from authenticating user root 79.32.212.213 port 53774 [preauth]
Dec 08 09:31:05 compute-1 sshd-session[30903]: Received disconnect from 103.14.32.75 port 55424:11:  [preauth]
Dec 08 09:31:05 compute-1 sshd-session[30903]: Disconnected from authenticating user root 103.14.32.75 port 55424 [preauth]
Dec 08 09:31:37 compute-1 sshd-session[30905]: Invalid user ribbon from 180.76.105.69 port 52114
Dec 08 09:31:37 compute-1 sshd-session[30905]: Received disconnect from 180.76.105.69 port 52114:11: Bye Bye [preauth]
Dec 08 09:31:37 compute-1 sshd-session[30905]: Disconnected from invalid user ribbon 180.76.105.69 port 52114 [preauth]
Dec 08 09:33:34 compute-1 sshd-session[30909]: Received disconnect from 103.191.92.236 port 38944:11: Bye Bye [preauth]
Dec 08 09:33:34 compute-1 sshd-session[30909]: Disconnected from authenticating user root 103.191.92.236 port 38944 [preauth]
Dec 08 09:33:43 compute-1 sshd[1006]: Timeout before authentication for connection from 14.103.130.89 to 38.102.83.181, pid = 30907
Dec 08 09:34:36 compute-1 sshd-session[30911]: Invalid user redis from 95.128.196.223 port 42186
Dec 08 09:34:36 compute-1 sshd-session[30911]: Received disconnect from 95.128.196.223 port 42186:11: Bye Bye [preauth]
Dec 08 09:34:36 compute-1 sshd-session[30911]: Disconnected from invalid user redis 95.128.196.223 port 42186 [preauth]
Dec 08 09:34:38 compute-1 sshd-session[30913]: Received disconnect from 79.32.212.213 port 51534:11: Bye Bye [preauth]
Dec 08 09:34:38 compute-1 sshd-session[30913]: Disconnected from authenticating user root 79.32.212.213 port 51534 [preauth]
Dec 08 09:35:47 compute-1 sshd-session[30917]: Received disconnect from 180.76.105.69 port 58180:11: Bye Bye [preauth]
Dec 08 09:35:47 compute-1 sshd-session[30917]: Disconnected from authenticating user root 180.76.105.69 port 58180 [preauth]
Dec 08 09:35:54 compute-1 sshd-session[30919]: Received disconnect from 103.191.92.236 port 43928:11: Bye Bye [preauth]
Dec 08 09:35:54 compute-1 sshd-session[30919]: Disconnected from authenticating user root 103.191.92.236 port 43928 [preauth]
Dec 08 09:36:03 compute-1 sshd-session[30921]: Received disconnect from 79.32.212.213 port 42448:11: Bye Bye [preauth]
Dec 08 09:36:03 compute-1 sshd-session[30921]: Disconnected from authenticating user root 79.32.212.213 port 42448 [preauth]
Dec 08 09:36:10 compute-1 sshd-session[30924]: Accepted publickey for zuul from 192.168.122.30 port 34394 ssh2: ECDSA SHA256:OYiJ0qK9HlckOLsAMneS1eCh6uM9OgfWStqM+CYb3U8
Dec 08 09:36:10 compute-1 systemd-logind[795]: New session 9 of user zuul.
Dec 08 09:36:11 compute-1 systemd[1]: Started Session 9 of User zuul.
Dec 08 09:36:11 compute-1 sshd-session[30924]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 08 09:36:12 compute-1 python3.9[31077]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 08 09:36:13 compute-1 sudo[31256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktqgdnvlwpbibqrhssbnehjnxvlelrjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186572.8843384-58-201888697159097/AnsiballZ_command.py'
Dec 08 09:36:13 compute-1 sudo[31256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:36:13 compute-1 python3.9[31258]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 08 09:36:20 compute-1 sudo[31256]: pam_unix(sudo:session): session closed for user root
Dec 08 09:36:21 compute-1 sshd-session[30927]: Connection closed by 192.168.122.30 port 34394
Dec 08 09:36:21 compute-1 sshd-session[30924]: pam_unix(sshd:session): session closed for user zuul
Dec 08 09:36:21 compute-1 systemd[1]: session-9.scope: Deactivated successfully.
Dec 08 09:36:21 compute-1 systemd[1]: session-9.scope: Consumed 8.253s CPU time.
Dec 08 09:36:21 compute-1 systemd-logind[795]: Session 9 logged out. Waiting for processes to exit.
Dec 08 09:36:21 compute-1 systemd-logind[795]: Removed session 9.
Dec 08 09:36:26 compute-1 sshd-session[31315]: Invalid user jenkins from 95.128.196.223 port 33824
Dec 08 09:36:26 compute-1 sshd-session[31315]: Received disconnect from 95.128.196.223 port 33824:11: Bye Bye [preauth]
Dec 08 09:36:26 compute-1 sshd-session[31315]: Disconnected from invalid user jenkins 95.128.196.223 port 33824 [preauth]
Dec 08 09:36:37 compute-1 sshd-session[31317]: Accepted publickey for zuul from 192.168.122.30 port 47404 ssh2: ECDSA SHA256:OYiJ0qK9HlckOLsAMneS1eCh6uM9OgfWStqM+CYb3U8
Dec 08 09:36:37 compute-1 systemd-logind[795]: New session 10 of user zuul.
Dec 08 09:36:37 compute-1 systemd[1]: Started Session 10 of User zuul.
Dec 08 09:36:37 compute-1 sshd-session[31317]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 08 09:36:38 compute-1 python3.9[31470]: ansible-ansible.legacy.ping Invoked with data=pong
Dec 08 09:36:39 compute-1 python3.9[31644]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 08 09:36:40 compute-1 sudo[31794]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xyonxesrmvpfswaipqvrjshxntuzpuyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186599.7709243-94-243040081602825/AnsiballZ_command.py'
Dec 08 09:36:40 compute-1 sudo[31794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:36:40 compute-1 python3.9[31796]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 08 09:36:40 compute-1 sudo[31794]: pam_unix(sudo:session): session closed for user root
Dec 08 09:36:41 compute-1 sudo[31947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytacxpopkcmfgxucquxadvliguccsvsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186600.913208-130-67871490714966/AnsiballZ_stat.py'
Dec 08 09:36:41 compute-1 sudo[31947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:36:41 compute-1 python3.9[31949]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 08 09:36:41 compute-1 sudo[31947]: pam_unix(sudo:session): session closed for user root
Dec 08 09:36:42 compute-1 sudo[32099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbjfrpxeiuffepdygiipqpgkihmbhdhe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186602.0183318-154-239841367899927/AnsiballZ_file.py'
Dec 08 09:36:42 compute-1 sudo[32099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:36:42 compute-1 python3.9[32101]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:36:42 compute-1 sudo[32099]: pam_unix(sudo:session): session closed for user root
Dec 08 09:36:43 compute-1 sudo[32251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yltmhjyyeoehsevueyqudssdzyoobltk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186602.871206-178-47258714242017/AnsiballZ_stat.py'
Dec 08 09:36:43 compute-1 sudo[32251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:36:43 compute-1 python3.9[32253]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 08 09:36:43 compute-1 sudo[32251]: pam_unix(sudo:session): session closed for user root
Dec 08 09:36:43 compute-1 sudo[32374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfldjfzxiftwcwkwrkytdcizwaytzmhg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186602.871206-178-47258714242017/AnsiballZ_copy.py'
Dec 08 09:36:43 compute-1 sudo[32374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:36:44 compute-1 python3.9[32376]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1765186602.871206-178-47258714242017/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:36:44 compute-1 sudo[32374]: pam_unix(sudo:session): session closed for user root
Dec 08 09:36:44 compute-1 sudo[32526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhhgxsbpmsywbmcbjjhlfyjzknhcihpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186604.2506998-223-21644621253079/AnsiballZ_setup.py'
Dec 08 09:36:44 compute-1 sudo[32526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:36:44 compute-1 python3.9[32528]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 08 09:36:45 compute-1 sudo[32526]: pam_unix(sudo:session): session closed for user root
Dec 08 09:36:45 compute-1 sudo[32682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ranwlmdkrtawgyyjmvyeeejzrkgrtlry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186605.2783294-247-26347622284843/AnsiballZ_file.py'
Dec 08 09:36:45 compute-1 sudo[32682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:36:45 compute-1 python3.9[32684]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 08 09:36:45 compute-1 sudo[32682]: pam_unix(sudo:session): session closed for user root
Dec 08 09:36:46 compute-1 sudo[32834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihypyddjxeedmylsucgnkzddvpoosyfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186606.0805252-274-62044690267148/AnsiballZ_file.py'
Dec 08 09:36:46 compute-1 sudo[32834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:36:46 compute-1 python3.9[32836]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 08 09:36:46 compute-1 sudo[32834]: pam_unix(sudo:session): session closed for user root
Dec 08 09:36:47 compute-1 python3.9[32986]: ansible-ansible.builtin.service_facts Invoked
Dec 08 09:36:53 compute-1 sshd[1006]: Timeout before authentication for connection from 120.48.123.76 to 38.102.83.181, pid = 30915
Dec 08 09:36:53 compute-1 python3.9[33239]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:36:54 compute-1 python3.9[33389]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 08 09:36:55 compute-1 python3.9[33543]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 08 09:36:56 compute-1 sudo[33699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djilgxdnokrnrismgtppvikhjcjwgefy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186616.1522834-419-141332689067133/AnsiballZ_setup.py'
Dec 08 09:36:56 compute-1 sudo[33699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:36:56 compute-1 python3.9[33701]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 08 09:36:57 compute-1 sudo[33699]: pam_unix(sudo:session): session closed for user root
Dec 08 09:36:57 compute-1 sudo[33783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yavjsornsdlgepfaixnxjzowqzuijhqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186616.1522834-419-141332689067133/AnsiballZ_dnf.py'
Dec 08 09:36:57 compute-1 sudo[33783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:36:57 compute-1 python3.9[33785]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 08 09:37:25 compute-1 sshd-session[33901]: Connection closed by 150.5.169.176 port 54470 [preauth]
Dec 08 09:37:32 compute-1 sshd-session[33933]: Received disconnect from 79.32.212.213 port 47518:11: Bye Bye [preauth]
Dec 08 09:37:32 compute-1 sshd-session[33933]: Disconnected from authenticating user root 79.32.212.213 port 47518 [preauth]
Dec 08 09:37:36 compute-1 sshd-session[33935]: Received disconnect from 103.191.92.236 port 43950:11: Bye Bye [preauth]
Dec 08 09:37:36 compute-1 sshd-session[33935]: Disconnected from authenticating user root 103.191.92.236 port 43950 [preauth]
Dec 08 09:37:47 compute-1 systemd[1]: Reloading.
Dec 08 09:37:47 compute-1 systemd-rc-local-generator[33992]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 08 09:37:47 compute-1 systemd[1]: Starting dnf makecache...
Dec 08 09:37:47 compute-1 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Dec 08 09:37:48 compute-1 dnf[34001]: Failed determining last makecache time.
Dec 08 09:37:48 compute-1 dnf[34001]: delorean-openstack-barbican-42b4c41831408a8e323 150 kB/s | 3.0 kB     00:00
Dec 08 09:37:48 compute-1 dnf[34001]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 183 kB/s | 3.0 kB     00:00
Dec 08 09:37:48 compute-1 dnf[34001]: delorean-openstack-cinder-1c00d6490d88e436f26ef 170 kB/s | 3.0 kB     00:00
Dec 08 09:37:48 compute-1 dnf[34001]: delorean-python-stevedore-c4acc5639fd2329372142 174 kB/s | 3.0 kB     00:00
Dec 08 09:37:48 compute-1 dnf[34001]: delorean-python-cloudkitty-tests-tempest-2c80f8 174 kB/s | 3.0 kB     00:00
Dec 08 09:37:48 compute-1 dnf[34001]: delorean-os-refresh-config-9bfc52b5049be2d8de61 186 kB/s | 3.0 kB     00:00
Dec 08 09:37:48 compute-1 dnf[34001]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 183 kB/s | 3.0 kB     00:00
Dec 08 09:37:48 compute-1 dnf[34001]: delorean-python-designate-tests-tempest-347fdbc 172 kB/s | 3.0 kB     00:00
Dec 08 09:37:48 compute-1 dnf[34001]: delorean-openstack-glance-1fd12c29b339f30fe823e 169 kB/s | 3.0 kB     00:00
Dec 08 09:37:48 compute-1 dnf[34001]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 190 kB/s | 3.0 kB     00:00
Dec 08 09:37:48 compute-1 dnf[34001]: delorean-openstack-manila-3c01b7181572c95dac462 199 kB/s | 3.0 kB     00:00
Dec 08 09:37:48 compute-1 dnf[34001]: delorean-python-whitebox-neutron-tests-tempest- 187 kB/s | 3.0 kB     00:00
Dec 08 09:37:48 compute-1 dnf[34001]: delorean-openstack-octavia-ba397f07a7331190208c 181 kB/s | 3.0 kB     00:00
Dec 08 09:37:48 compute-1 dnf[34001]: delorean-openstack-watcher-c014f81a8647287f6dcc 166 kB/s | 3.0 kB     00:00
Dec 08 09:37:48 compute-1 systemd[1]: Reloading.
Dec 08 09:37:48 compute-1 dnf[34001]: delorean-ansible-config_template-5ccaa22121a7ff 178 kB/s | 3.0 kB     00:00
Dec 08 09:37:48 compute-1 dnf[34001]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 176 kB/s | 3.0 kB     00:00
Dec 08 09:37:48 compute-1 dnf[34001]: delorean-openstack-swift-dc98a8463506ac520c469a 194 kB/s | 3.0 kB     00:00
Dec 08 09:37:48 compute-1 dnf[34001]: delorean-python-tempestconf-8515371b7cceebd4282 213 kB/s | 3.0 kB     00:00
Dec 08 09:37:48 compute-1 dnf[34001]: delorean-openstack-heat-ui-013accbfd179753bc3f0 194 kB/s | 3.0 kB     00:00
Dec 08 09:37:48 compute-1 systemd-rc-local-generator[34047]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 08 09:37:48 compute-1 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Dec 08 09:37:48 compute-1 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Dec 08 09:37:48 compute-1 dnf[34001]: CentOS Stream 9 - BaseOS                         30 kB/s | 7.3 kB     00:00
Dec 08 09:37:48 compute-1 systemd[1]: Reloading.
Dec 08 09:37:48 compute-1 systemd-rc-local-generator[34087]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 08 09:37:48 compute-1 dnf[34001]: CentOS Stream 9 - AppStream                      71 kB/s | 7.4 kB     00:00
Dec 08 09:37:48 compute-1 systemd[1]: Listening on LVM2 poll daemon socket.
Dec 08 09:37:49 compute-1 dnf[34001]: CentOS Stream 9 - CRB                            68 kB/s | 7.2 kB     00:00
Dec 08 09:37:49 compute-1 dbus-broker-launch[736]: Noticed file-system modification, trigger reload.
Dec 08 09:37:49 compute-1 dbus-broker-launch[736]: Noticed file-system modification, trigger reload.
Dec 08 09:37:49 compute-1 dbus-broker-launch[736]: Noticed file-system modification, trigger reload.
Dec 08 09:37:49 compute-1 dnf[34001]: CentOS Stream 9 - Extras packages                29 kB/s | 8.3 kB     00:00
Dec 08 09:37:49 compute-1 dnf[34001]: dlrn-antelope-testing                           178 kB/s | 3.0 kB     00:00
Dec 08 09:37:49 compute-1 dnf[34001]: dlrn-antelope-build-deps                        195 kB/s | 3.0 kB     00:00
Dec 08 09:37:49 compute-1 dnf[34001]: centos9-rabbitmq                                132 kB/s | 3.0 kB     00:00
Dec 08 09:37:49 compute-1 dnf[34001]: centos9-storage                                 151 kB/s | 3.0 kB     00:00
Dec 08 09:37:49 compute-1 dnf[34001]: centos9-opstools                                129 kB/s | 3.0 kB     00:00
Dec 08 09:37:49 compute-1 dnf[34001]: NFV SIG OpenvSwitch                             121 kB/s | 3.0 kB     00:00
Dec 08 09:37:49 compute-1 dnf[34001]: repo-setup-centos-appstream                     201 kB/s | 4.4 kB     00:00
Dec 08 09:37:49 compute-1 dnf[34001]: repo-setup-centos-baseos                        159 kB/s | 3.9 kB     00:00
Dec 08 09:37:49 compute-1 dnf[34001]: repo-setup-centos-highavailability              191 kB/s | 3.9 kB     00:00
Dec 08 09:37:49 compute-1 dnf[34001]: repo-setup-centos-powertools                    177 kB/s | 4.3 kB     00:00
Dec 08 09:37:50 compute-1 dnf[34001]: Extra Packages for Enterprise Linux 9 - x86_64  183 kB/s |  34 kB     00:00
Dec 08 09:37:50 compute-1 dnf[34001]: Metadata cache created.
Dec 08 09:37:50 compute-1 systemd[1]: dnf-makecache.service: Deactivated successfully.
Dec 08 09:37:50 compute-1 systemd[1]: Finished dnf makecache.
Dec 08 09:37:50 compute-1 systemd[1]: dnf-makecache.service: Consumed 1.820s CPU time.
Dec 08 09:37:53 compute-1 sshd-session[34142]: Received disconnect from 95.128.196.223 port 34182:11: Bye Bye [preauth]
Dec 08 09:37:53 compute-1 sshd-session[34142]: Disconnected from authenticating user root 95.128.196.223 port 34182 [preauth]
Dec 08 09:38:01 compute-1 sshd[1006]: drop connection #0 from [120.48.123.76]:58308 on [38.102.83.181]:22 penalty: exceeded LoginGraceTime
Dec 08 09:38:53 compute-1 kernel: SELinux:  Converting 2718 SID table entries...
Dec 08 09:38:53 compute-1 kernel: SELinux:  policy capability network_peer_controls=1
Dec 08 09:38:53 compute-1 kernel: SELinux:  policy capability open_perms=1
Dec 08 09:38:53 compute-1 kernel: SELinux:  policy capability extended_socket_class=1
Dec 08 09:38:53 compute-1 kernel: SELinux:  policy capability always_check_network=0
Dec 08 09:38:53 compute-1 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 08 09:38:53 compute-1 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 08 09:38:53 compute-1 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 08 09:38:53 compute-1 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Dec 08 09:38:53 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 08 09:38:53 compute-1 systemd[1]: Starting man-db-cache-update.service...
Dec 08 09:38:53 compute-1 systemd[1]: Reloading.
Dec 08 09:38:53 compute-1 systemd-rc-local-generator[34444]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 08 09:38:53 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 08 09:38:54 compute-1 sudo[33783]: pam_unix(sudo:session): session closed for user root
Dec 08 09:38:54 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 08 09:38:54 compute-1 systemd[1]: Finished man-db-cache-update.service.
Dec 08 09:38:54 compute-1 systemd[1]: man-db-cache-update.service: Consumed 1.199s CPU time.
Dec 08 09:38:54 compute-1 systemd[1]: run-r5058870c478143f09cdb17840cdb7090.service: Deactivated successfully.
Dec 08 09:38:54 compute-1 sudo[35354]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvfrnbwwqcmmegbewaiqivpzrnykcuhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186734.3545277-455-55313678169672/AnsiballZ_command.py'
Dec 08 09:38:54 compute-1 sudo[35354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:38:54 compute-1 python3.9[35356]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 08 09:38:55 compute-1 sudo[35354]: pam_unix(sudo:session): session closed for user root
Dec 08 09:38:56 compute-1 sudo[35635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmzpncqkxfawzzelznzicivxidnmasio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186735.9256597-478-207119282783491/AnsiballZ_selinux.py'
Dec 08 09:38:56 compute-1 sudo[35635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:38:56 compute-1 python3.9[35637]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Dec 08 09:38:56 compute-1 sudo[35635]: pam_unix(sudo:session): session closed for user root
Dec 08 09:38:57 compute-1 sudo[35787]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfdgqhnhyqttleifxrxrrcgmpujkgugf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186737.2981944-511-73965746942774/AnsiballZ_command.py'
Dec 08 09:38:57 compute-1 sudo[35787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:38:57 compute-1 python3.9[35789]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Dec 08 09:38:58 compute-1 sshd-session[35791]: Received disconnect from 79.32.212.213 port 39030:11: Bye Bye [preauth]
Dec 08 09:38:58 compute-1 sshd-session[35791]: Disconnected from authenticating user root 79.32.212.213 port 39030 [preauth]
Dec 08 09:38:58 compute-1 sudo[35787]: pam_unix(sudo:session): session closed for user root
Dec 08 09:38:59 compute-1 sudo[35942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blptmbdohhjeedxxttmpsmhhuatkwpcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186739.2752218-535-139007670249974/AnsiballZ_file.py'
Dec 08 09:38:59 compute-1 sudo[35942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:39:00 compute-1 python3.9[35944]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:39:00 compute-1 sudo[35942]: pam_unix(sudo:session): session closed for user root
Dec 08 09:39:05 compute-1 sudo[36095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhltnryaxvxlzhxivqhekznqsencrbdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186744.9261007-559-159269694655571/AnsiballZ_mount.py'
Dec 08 09:39:05 compute-1 sudo[36095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:39:07 compute-1 python3.9[36097]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Dec 08 09:39:07 compute-1 sudo[36095]: pam_unix(sudo:session): session closed for user root
Dec 08 09:39:09 compute-1 sudo[36247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atxypueylqsajgodoobipobxywzncwbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186748.9558992-643-171925504891891/AnsiballZ_file.py'
Dec 08 09:39:09 compute-1 sudo[36247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:39:09 compute-1 python3.9[36249]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 08 09:39:09 compute-1 sudo[36247]: pam_unix(sudo:session): session closed for user root
Dec 08 09:39:09 compute-1 sudo[36399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-boufnsklljhanoaofjrfufyqqfuoslka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186749.6473553-667-114654605775103/AnsiballZ_stat.py'
Dec 08 09:39:09 compute-1 sudo[36399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:39:10 compute-1 python3.9[36401]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 08 09:39:10 compute-1 sudo[36399]: pam_unix(sudo:session): session closed for user root
Dec 08 09:39:10 compute-1 sudo[36522]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgkysbvhiiviujmebtujoyftpyuveypy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186749.6473553-667-114654605775103/AnsiballZ_copy.py'
Dec 08 09:39:10 compute-1 sudo[36522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:39:10 compute-1 python3.9[36524]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765186749.6473553-667-114654605775103/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=747873c1ecad1b42bf7284bb8d89d0dfb93dcb85 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:39:10 compute-1 sudo[36522]: pam_unix(sudo:session): session closed for user root
Dec 08 09:39:12 compute-1 sudo[36674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilybcmdvejgroudgbvscwektmathjsqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186752.306492-739-200406050557240/AnsiballZ_stat.py'
Dec 08 09:39:12 compute-1 sudo[36674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:39:12 compute-1 python3.9[36676]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 08 09:39:12 compute-1 sudo[36674]: pam_unix(sudo:session): session closed for user root
Dec 08 09:39:13 compute-1 sudo[36826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ullnufmryifjdgoaoyvgtwjopetreert ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186753.035282-763-123857461060352/AnsiballZ_command.py'
Dec 08 09:39:13 compute-1 sudo[36826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:39:13 compute-1 python3.9[36828]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 08 09:39:13 compute-1 sudo[36826]: pam_unix(sudo:session): session closed for user root
Dec 08 09:39:14 compute-1 sudo[36979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrfrpabkhfhogvrcmacqtafopwfznjkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186753.8838968-787-29945809902501/AnsiballZ_file.py'
Dec 08 09:39:14 compute-1 sudo[36979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:39:14 compute-1 python3.9[36981]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:39:14 compute-1 sudo[36979]: pam_unix(sudo:session): session closed for user root
Dec 08 09:39:15 compute-1 sudo[37131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdkyeoeqyghwpazdjgiuecyuoapgvyxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186754.895165-820-196251494123108/AnsiballZ_getent.py'
Dec 08 09:39:15 compute-1 sudo[37131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:39:15 compute-1 python3.9[37133]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Dec 08 09:39:15 compute-1 sudo[37131]: pam_unix(sudo:session): session closed for user root
Dec 08 09:39:15 compute-1 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 08 09:39:16 compute-1 sudo[37285]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-luusoptjbvyomgilzoesjrulxrrihnkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186755.7346373-844-113783321762246/AnsiballZ_group.py'
Dec 08 09:39:16 compute-1 sudo[37285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:39:16 compute-1 python3.9[37287]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 08 09:39:16 compute-1 groupadd[37290]: group added to /etc/group: name=qemu, GID=107
Dec 08 09:39:16 compute-1 groupadd[37290]: group added to /etc/gshadow: name=qemu
Dec 08 09:39:16 compute-1 groupadd[37290]: new group: name=qemu, GID=107
Dec 08 09:39:16 compute-1 sudo[37285]: pam_unix(sudo:session): session closed for user root
Dec 08 09:39:17 compute-1 sudo[37445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnjzrzltxrjzwpzqqfdwvbajeeueebtr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186756.7369924-868-72584567727539/AnsiballZ_user.py'
Dec 08 09:39:17 compute-1 sudo[37445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:39:17 compute-1 python3.9[37447]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 08 09:39:17 compute-1 useradd[37449]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Dec 08 09:39:17 compute-1 sudo[37445]: pam_unix(sudo:session): session closed for user root
Dec 08 09:39:17 compute-1 sshd-session[37288]: Received disconnect from 95.128.196.223 port 43634:11: Bye Bye [preauth]
Dec 08 09:39:17 compute-1 sshd-session[37288]: Disconnected from authenticating user root 95.128.196.223 port 43634 [preauth]
Dec 08 09:39:18 compute-1 sudo[37605]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpjmtcdtjzjfaqhqcpsftufllkkfrmaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186757.7597911-892-224365314720456/AnsiballZ_getent.py'
Dec 08 09:39:18 compute-1 sudo[37605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:39:18 compute-1 python3.9[37607]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Dec 08 09:39:18 compute-1 sudo[37605]: pam_unix(sudo:session): session closed for user root
Dec 08 09:39:18 compute-1 sudo[37758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljkwlqccvinxhgclkyewbtbeplkluses ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186758.5009973-916-31018638731601/AnsiballZ_group.py'
Dec 08 09:39:18 compute-1 sudo[37758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:39:18 compute-1 python3.9[37760]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 08 09:39:19 compute-1 groupadd[37761]: group added to /etc/group: name=hugetlbfs, GID=42477
Dec 08 09:39:19 compute-1 groupadd[37761]: group added to /etc/gshadow: name=hugetlbfs
Dec 08 09:39:19 compute-1 groupadd[37761]: new group: name=hugetlbfs, GID=42477
Dec 08 09:39:19 compute-1 sudo[37758]: pam_unix(sudo:session): session closed for user root
Dec 08 09:39:19 compute-1 irqbalance[791]: Cannot change IRQ 26 affinity: Operation not permitted
Dec 08 09:39:19 compute-1 irqbalance[791]: IRQ 26 affinity is now unmanaged
Dec 08 09:39:19 compute-1 sudo[37916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qewcbaeswgihvxddxmnqvmiibgrglyqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186759.4163027-943-105075832534283/AnsiballZ_file.py'
Dec 08 09:39:19 compute-1 sudo[37916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:39:19 compute-1 python3.9[37918]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Dec 08 09:39:19 compute-1 sudo[37916]: pam_unix(sudo:session): session closed for user root
Dec 08 09:39:20 compute-1 sudo[38068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqdktukvdbynirwtugmlpdwigukjuilf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186760.3777864-976-83227945817882/AnsiballZ_dnf.py'
Dec 08 09:39:20 compute-1 sudo[38068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:39:20 compute-1 python3.9[38070]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 08 09:39:22 compute-1 sudo[38068]: pam_unix(sudo:session): session closed for user root
Dec 08 09:39:22 compute-1 sudo[38221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hindtmhvlowjiljgxznybkgjotwioety ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186762.7519438-1000-78117629238468/AnsiballZ_file.py'
Dec 08 09:39:22 compute-1 sudo[38221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:39:23 compute-1 python3.9[38223]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 08 09:39:23 compute-1 sudo[38221]: pam_unix(sudo:session): session closed for user root
Dec 08 09:39:23 compute-1 sudo[38373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prfboewjuqecrmfpjgwhkaygymhnpeio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186763.4736392-1024-15986688171596/AnsiballZ_stat.py'
Dec 08 09:39:23 compute-1 sudo[38373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:39:23 compute-1 python3.9[38375]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 08 09:39:23 compute-1 sudo[38373]: pam_unix(sudo:session): session closed for user root
Dec 08 09:39:24 compute-1 sudo[38496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npaxykhmjibihecjjkmlhnibtkffzyew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186763.4736392-1024-15986688171596/AnsiballZ_copy.py'
Dec 08 09:39:24 compute-1 sudo[38496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:39:24 compute-1 python3.9[38498]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765186763.4736392-1024-15986688171596/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 08 09:39:24 compute-1 sudo[38496]: pam_unix(sudo:session): session closed for user root
Dec 08 09:39:25 compute-1 sudo[38648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlitpuvlyrqfsbbzhdmfewtozoyucusf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186764.745925-1069-2751615164517/AnsiballZ_systemd.py'
Dec 08 09:39:25 compute-1 sudo[38648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:39:25 compute-1 python3.9[38650]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 08 09:39:25 compute-1 systemd[1]: Starting Load Kernel Modules...
Dec 08 09:39:25 compute-1 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 08 09:39:25 compute-1 kernel: Bridge firewalling registered
Dec 08 09:39:25 compute-1 systemd-modules-load[38654]: Inserted module 'br_netfilter'
Dec 08 09:39:25 compute-1 systemd[1]: Finished Load Kernel Modules.
Dec 08 09:39:25 compute-1 sudo[38648]: pam_unix(sudo:session): session closed for user root
Dec 08 09:39:26 compute-1 sudo[38809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhpttdbjizrxjnflepdvuszrimvtnzcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186766.0498073-1093-15166431019178/AnsiballZ_stat.py'
Dec 08 09:39:26 compute-1 sudo[38809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:39:26 compute-1 python3.9[38811]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 08 09:39:26 compute-1 sudo[38809]: pam_unix(sudo:session): session closed for user root
Dec 08 09:39:26 compute-1 sudo[38934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebfezegpqylbomcavkvbusjfgiffylfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186766.0498073-1093-15166431019178/AnsiballZ_copy.py'
Dec 08 09:39:26 compute-1 sudo[38934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:39:27 compute-1 python3.9[38936]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765186766.0498073-1093-15166431019178/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 08 09:39:27 compute-1 sudo[38934]: pam_unix(sudo:session): session closed for user root
Dec 08 09:39:28 compute-1 sudo[39086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfcualnssnyygoneulutaltczkefrlmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186767.7384992-1147-106126641881276/AnsiballZ_dnf.py'
Dec 08 09:39:28 compute-1 sudo[39086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:39:28 compute-1 python3.9[39088]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 08 09:39:28 compute-1 sshd-session[38906]: Received disconnect from 103.191.92.236 port 37214:11: Bye Bye [preauth]
Dec 08 09:39:28 compute-1 sshd-session[38906]: Disconnected from authenticating user root 103.191.92.236 port 37214 [preauth]
Dec 08 09:39:29 compute-1 irqbalance[791]: Cannot change IRQ 27 affinity: Operation not permitted
Dec 08 09:39:29 compute-1 irqbalance[791]: IRQ 27 affinity is now unmanaged
Dec 08 09:39:31 compute-1 dbus-broker-launch[736]: Noticed file-system modification, trigger reload.
Dec 08 09:39:31 compute-1 dbus-broker-launch[736]: Noticed file-system modification, trigger reload.
Dec 08 09:39:32 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 08 09:39:32 compute-1 systemd[1]: Starting man-db-cache-update.service...
Dec 08 09:39:32 compute-1 systemd[1]: Reloading.
Dec 08 09:39:32 compute-1 systemd-rc-local-generator[39153]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 08 09:39:32 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 08 09:39:33 compute-1 sudo[39086]: pam_unix(sudo:session): session closed for user root
Dec 08 09:39:34 compute-1 python3.9[40822]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 08 09:39:35 compute-1 python3.9[41809]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Dec 08 09:39:35 compute-1 python3.9[42658]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 08 09:39:36 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 08 09:39:36 compute-1 systemd[1]: Finished man-db-cache-update.service.
Dec 08 09:39:36 compute-1 systemd[1]: man-db-cache-update.service: Consumed 4.727s CPU time.
Dec 08 09:39:36 compute-1 systemd[1]: run-r354c51c49a6d4aef87854c653a365ad0.service: Deactivated successfully.
Dec 08 09:39:36 compute-1 sudo[43249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxkvfvdwcypnbommcrdjpwjaqgntqcgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186776.0371633-1264-73746013456558/AnsiballZ_command.py'
Dec 08 09:39:36 compute-1 sudo[43249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:39:36 compute-1 python3.9[43251]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 08 09:39:36 compute-1 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 08 09:39:37 compute-1 systemd[1]: Starting Authorization Manager...
Dec 08 09:39:37 compute-1 systemd[1]: Started Dynamic System Tuning Daemon.
Dec 08 09:39:37 compute-1 polkitd[43468]: Started polkitd version 0.117
Dec 08 09:39:37 compute-1 polkitd[43468]: Loading rules from directory /etc/polkit-1/rules.d
Dec 08 09:39:37 compute-1 polkitd[43468]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 08 09:39:37 compute-1 polkitd[43468]: Finished loading, compiling and executing 2 rules
Dec 08 09:39:37 compute-1 polkitd[43468]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Dec 08 09:39:37 compute-1 systemd[1]: Started Authorization Manager.
Dec 08 09:39:37 compute-1 sudo[43249]: pam_unix(sudo:session): session closed for user root
Dec 08 09:39:37 compute-1 sudo[43636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aulgcmzqhnbngbfwunszptprgvjkojbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186777.6107695-1291-127483012216226/AnsiballZ_systemd.py'
Dec 08 09:39:37 compute-1 sudo[43636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:39:38 compute-1 python3.9[43638]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 08 09:39:38 compute-1 systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec 08 09:39:38 compute-1 systemd[1]: tuned.service: Deactivated successfully.
Dec 08 09:39:38 compute-1 systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec 08 09:39:38 compute-1 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 08 09:39:38 compute-1 systemd[1]: Started Dynamic System Tuning Daemon.
Dec 08 09:39:38 compute-1 sudo[43636]: pam_unix(sudo:session): session closed for user root
Dec 08 09:39:39 compute-1 python3.9[43800]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Dec 08 09:39:43 compute-1 sudo[43950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzyzxdjbbnlhcbvwxegwexqdnqtsytkw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186783.1018384-1462-106966641588869/AnsiballZ_systemd.py'
Dec 08 09:39:43 compute-1 sudo[43950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:39:43 compute-1 python3.9[43952]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 08 09:39:43 compute-1 systemd[1]: Reloading.
Dec 08 09:39:43 compute-1 systemd-rc-local-generator[43982]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 08 09:39:44 compute-1 sudo[43950]: pam_unix(sudo:session): session closed for user root
Dec 08 09:39:44 compute-1 sudo[44139]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llsmupxselfpxfnlzxywjvscolvgedhq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186784.2128093-1462-166229791167245/AnsiballZ_systemd.py'
Dec 08 09:39:44 compute-1 sudo[44139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:39:44 compute-1 python3.9[44141]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 08 09:39:45 compute-1 systemd[1]: Reloading.
Dec 08 09:39:46 compute-1 systemd-rc-local-generator[44172]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 08 09:39:46 compute-1 sudo[44139]: pam_unix(sudo:session): session closed for user root
Dec 08 09:39:47 compute-1 sudo[44329]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppoogltfgencpzrzthedbrxqpqeqkkps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186786.5683577-1510-93624046442922/AnsiballZ_command.py'
Dec 08 09:39:47 compute-1 sudo[44329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:39:47 compute-1 python3.9[44331]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 08 09:39:47 compute-1 sudo[44329]: pam_unix(sudo:session): session closed for user root
Dec 08 09:39:48 compute-1 sudo[44482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nretwrqgvomlokihfsszxgretoadlqbf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186787.7028654-1534-264049108527564/AnsiballZ_command.py'
Dec 08 09:39:48 compute-1 sudo[44482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:39:48 compute-1 python3.9[44484]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 08 09:39:48 compute-1 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Dec 08 09:39:48 compute-1 sudo[44482]: pam_unix(sudo:session): session closed for user root
Dec 08 09:39:48 compute-1 sudo[44635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqomsvuvoukurjmuhiyinemowzfkfcvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186788.5436985-1558-196958600218058/AnsiballZ_command.py'
Dec 08 09:39:48 compute-1 sudo[44635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:39:49 compute-1 python3.9[44637]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 08 09:39:50 compute-1 sudo[44635]: pam_unix(sudo:session): session closed for user root
Dec 08 09:39:51 compute-1 sudo[44797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vohydbbhmomcnvwztcmzveeewwclijip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186790.7822967-1582-103905355073414/AnsiballZ_command.py'
Dec 08 09:39:51 compute-1 sudo[44797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:39:51 compute-1 python3.9[44799]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 08 09:39:51 compute-1 sudo[44797]: pam_unix(sudo:session): session closed for user root
Dec 08 09:39:51 compute-1 sudo[44950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfbbetsfxufedznqhtxaqvhpllkncbus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186791.5042095-1606-25123066259830/AnsiballZ_systemd.py'
Dec 08 09:39:51 compute-1 sudo[44950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:39:52 compute-1 python3.9[44952]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 08 09:39:52 compute-1 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 08 09:39:52 compute-1 systemd[1]: Stopped Apply Kernel Variables.
Dec 08 09:39:52 compute-1 systemd[1]: Stopping Apply Kernel Variables...
Dec 08 09:39:52 compute-1 systemd[1]: Starting Apply Kernel Variables...
Dec 08 09:39:52 compute-1 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 08 09:39:52 compute-1 systemd[1]: Finished Apply Kernel Variables.
Dec 08 09:39:52 compute-1 sudo[44950]: pam_unix(sudo:session): session closed for user root
Dec 08 09:39:52 compute-1 sshd-session[31320]: Connection closed by 192.168.122.30 port 47404
Dec 08 09:39:52 compute-1 sshd-session[31317]: pam_unix(sshd:session): session closed for user zuul
Dec 08 09:39:52 compute-1 systemd[1]: session-10.scope: Deactivated successfully.
Dec 08 09:39:52 compute-1 systemd[1]: session-10.scope: Consumed 2min 18.178s CPU time.
Dec 08 09:39:52 compute-1 systemd-logind[795]: Session 10 logged out. Waiting for processes to exit.
Dec 08 09:39:52 compute-1 systemd-logind[795]: Removed session 10.
Dec 08 09:39:58 compute-1 sshd-session[44982]: Accepted publickey for zuul from 192.168.122.30 port 57058 ssh2: ECDSA SHA256:OYiJ0qK9HlckOLsAMneS1eCh6uM9OgfWStqM+CYb3U8
Dec 08 09:39:58 compute-1 systemd-logind[795]: New session 11 of user zuul.
Dec 08 09:39:58 compute-1 systemd[1]: Started Session 11 of User zuul.
Dec 08 09:39:58 compute-1 sshd-session[44982]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 08 09:39:59 compute-1 python3.9[45135]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 08 09:40:00 compute-1 sudo[45289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbnhwwsxftswneoajxartjcmvkjryhcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186800.3001916-69-120661631345208/AnsiballZ_getent.py'
Dec 08 09:40:00 compute-1 sudo[45289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:40:00 compute-1 python3.9[45291]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Dec 08 09:40:00 compute-1 sudo[45289]: pam_unix(sudo:session): session closed for user root
Dec 08 09:40:01 compute-1 sudo[45442]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijbdjacxqwoapwesohvgezybrsnmmfkf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186801.220602-95-136121668605217/AnsiballZ_group.py'
Dec 08 09:40:01 compute-1 sudo[45442]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:40:01 compute-1 python3.9[45444]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 08 09:40:01 compute-1 groupadd[45445]: group added to /etc/group: name=openvswitch, GID=42476
Dec 08 09:40:01 compute-1 groupadd[45445]: group added to /etc/gshadow: name=openvswitch
Dec 08 09:40:01 compute-1 groupadd[45445]: new group: name=openvswitch, GID=42476
Dec 08 09:40:01 compute-1 sudo[45442]: pam_unix(sudo:session): session closed for user root
Dec 08 09:40:02 compute-1 sudo[45600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmycqwrivtutteblshlhnqhhxbphorxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186801.9905493-117-270563117472914/AnsiballZ_user.py'
Dec 08 09:40:02 compute-1 sudo[45600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:40:02 compute-1 python3.9[45602]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 08 09:40:03 compute-1 useradd[45604]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Dec 08 09:40:03 compute-1 useradd[45604]: add 'openvswitch' to group 'hugetlbfs'
Dec 08 09:40:03 compute-1 useradd[45604]: add 'openvswitch' to shadow group 'hugetlbfs'
Dec 08 09:40:03 compute-1 sudo[45600]: pam_unix(sudo:session): session closed for user root
Dec 08 09:40:03 compute-1 sudo[45760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efqqkymxgglppxqdgorvgixvlpgydreo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186803.5536473-147-97243799307479/AnsiballZ_setup.py'
Dec 08 09:40:03 compute-1 sudo[45760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:40:04 compute-1 python3.9[45762]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 08 09:40:04 compute-1 sudo[45760]: pam_unix(sudo:session): session closed for user root
Dec 08 09:40:05 compute-1 sudo[45844]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krqubhmvdeocbuiplqikzfhhsiebemhx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186803.5536473-147-97243799307479/AnsiballZ_dnf.py'
Dec 08 09:40:05 compute-1 sudo[45844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:40:05 compute-1 python3.9[45846]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 08 09:40:07 compute-1 sudo[45844]: pam_unix(sudo:session): session closed for user root
Dec 08 09:40:08 compute-1 sudo[46007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnifpxrkseswkxuctvvshvemydzozoby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186808.3553984-189-164663146570227/AnsiballZ_dnf.py'
Dec 08 09:40:08 compute-1 sudo[46007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:40:08 compute-1 python3.9[46009]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 08 09:40:17 compute-1 sshd-session[46024]: Received disconnect from 79.32.212.213 port 33366:11: Bye Bye [preauth]
Dec 08 09:40:17 compute-1 sshd-session[46024]: Disconnected from authenticating user root 79.32.212.213 port 33366 [preauth]
Dec 08 09:40:20 compute-1 kernel: SELinux:  Converting 2730 SID table entries...
Dec 08 09:40:20 compute-1 kernel: SELinux:  policy capability network_peer_controls=1
Dec 08 09:40:20 compute-1 kernel: SELinux:  policy capability open_perms=1
Dec 08 09:40:20 compute-1 kernel: SELinux:  policy capability extended_socket_class=1
Dec 08 09:40:20 compute-1 kernel: SELinux:  policy capability always_check_network=0
Dec 08 09:40:20 compute-1 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 08 09:40:20 compute-1 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 08 09:40:20 compute-1 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 08 09:40:20 compute-1 groupadd[46034]: group added to /etc/group: name=unbound, GID=993
Dec 08 09:40:20 compute-1 groupadd[46034]: group added to /etc/gshadow: name=unbound
Dec 08 09:40:20 compute-1 groupadd[46034]: new group: name=unbound, GID=993
Dec 08 09:40:20 compute-1 useradd[46041]: new user: name=unbound, UID=993, GID=993, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Dec 08 09:40:20 compute-1 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Dec 08 09:40:20 compute-1 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Dec 08 09:40:21 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 08 09:40:21 compute-1 systemd[1]: Starting man-db-cache-update.service...
Dec 08 09:40:21 compute-1 systemd[1]: Reloading.
Dec 08 09:40:21 compute-1 systemd-sysv-generator[46542]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 08 09:40:21 compute-1 systemd-rc-local-generator[46539]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 08 09:40:22 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 08 09:40:22 compute-1 sudo[46007]: pam_unix(sudo:session): session closed for user root
Dec 08 09:40:22 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 08 09:40:22 compute-1 systemd[1]: Finished man-db-cache-update.service.
Dec 08 09:40:22 compute-1 systemd[1]: run-r5df7fdb330714e4a8ac6555263f1858a.service: Deactivated successfully.
Dec 08 09:40:24 compute-1 sudo[47106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qglakgsolyytsptcpaakdjptlqjlsqeg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186823.5852308-213-238363573871331/AnsiballZ_systemd.py'
Dec 08 09:40:24 compute-1 sudo[47106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:40:24 compute-1 python3.9[47108]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 08 09:40:24 compute-1 systemd[1]: Reloading.
Dec 08 09:40:24 compute-1 systemd-rc-local-generator[47137]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 08 09:40:24 compute-1 systemd-sysv-generator[47141]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 08 09:40:24 compute-1 systemd[1]: Starting Open vSwitch Database Unit...
Dec 08 09:40:24 compute-1 chown[47149]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Dec 08 09:40:24 compute-1 ovs-ctl[47155]: /etc/openvswitch/conf.db does not exist ... (warning).
Dec 08 09:40:24 compute-1 ovs-ctl[47155]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Dec 08 09:40:24 compute-1 ovs-ctl[47155]: Starting ovsdb-server [  OK  ]
Dec 08 09:40:25 compute-1 ovs-vsctl[47204]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Dec 08 09:40:25 compute-1 ovs-vsctl[47220]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"2ff91e45-78ee-48e0-a0cc-897250947fa4\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Dec 08 09:40:25 compute-1 ovs-ctl[47155]: Configuring Open vSwitch system IDs [  OK  ]
Dec 08 09:40:25 compute-1 ovs-ctl[47155]: Enabling remote OVSDB managers [  OK  ]
Dec 08 09:40:25 compute-1 ovs-vsctl[47229]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-1
Dec 08 09:40:25 compute-1 systemd[1]: Started Open vSwitch Database Unit.
Dec 08 09:40:25 compute-1 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Dec 08 09:40:25 compute-1 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Dec 08 09:40:25 compute-1 systemd[1]: Starting Open vSwitch Forwarding Unit...
Dec 08 09:40:25 compute-1 kernel: openvswitch: Open vSwitch switching datapath
Dec 08 09:40:25 compute-1 ovs-ctl[47273]: Inserting openvswitch module [  OK  ]
Dec 08 09:40:25 compute-1 ovs-ctl[47242]: Starting ovs-vswitchd [  OK  ]
Dec 08 09:40:25 compute-1 ovs-ctl[47242]: Enabling remote OVSDB managers [  OK  ]
Dec 08 09:40:25 compute-1 ovs-vsctl[47291]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-1
Dec 08 09:40:25 compute-1 systemd[1]: Started Open vSwitch Forwarding Unit.
Dec 08 09:40:25 compute-1 systemd[1]: Starting Open vSwitch...
Dec 08 09:40:25 compute-1 systemd[1]: Finished Open vSwitch.
Dec 08 09:40:25 compute-1 sudo[47106]: pam_unix(sudo:session): session closed for user root
Dec 08 09:40:26 compute-1 python3.9[47442]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 08 09:40:27 compute-1 sudo[47592]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhzjmljnpjjpkauuueyhespztxunqcec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186826.6672626-267-151720099916685/AnsiballZ_sefcontext.py'
Dec 08 09:40:27 compute-1 sudo[47592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:40:27 compute-1 python3.9[47594]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Dec 08 09:40:28 compute-1 kernel: SELinux:  Converting 2744 SID table entries...
Dec 08 09:40:28 compute-1 kernel: SELinux:  policy capability network_peer_controls=1
Dec 08 09:40:28 compute-1 kernel: SELinux:  policy capability open_perms=1
Dec 08 09:40:28 compute-1 kernel: SELinux:  policy capability extended_socket_class=1
Dec 08 09:40:28 compute-1 kernel: SELinux:  policy capability always_check_network=0
Dec 08 09:40:28 compute-1 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 08 09:40:28 compute-1 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 08 09:40:28 compute-1 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 08 09:40:28 compute-1 sudo[47592]: pam_unix(sudo:session): session closed for user root
Dec 08 09:40:29 compute-1 python3.9[47749]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 08 09:40:30 compute-1 sudo[47905]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plbtqgequrlpbnqrfhjhwoncfxamnvfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186830.2137792-321-31909204832652/AnsiballZ_dnf.py'
Dec 08 09:40:30 compute-1 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Dec 08 09:40:30 compute-1 sudo[47905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:40:30 compute-1 python3.9[47907]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 08 09:40:31 compute-1 sudo[47905]: pam_unix(sudo:session): session closed for user root
Dec 08 09:40:33 compute-1 sudo[48058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asldqdcmvnwlgnrcsflvmyhxygkodcpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186832.7477064-345-234601126163476/AnsiballZ_command.py'
Dec 08 09:40:33 compute-1 sudo[48058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:40:33 compute-1 python3.9[48060]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 08 09:40:34 compute-1 sudo[48058]: pam_unix(sudo:session): session closed for user root
Dec 08 09:40:34 compute-1 sudo[48345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orlfhfjaztayabbwgcjymenojzfnuafk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186834.310028-369-238095692828417/AnsiballZ_file.py'
Dec 08 09:40:34 compute-1 sudo[48345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:40:34 compute-1 python3.9[48347]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 08 09:40:34 compute-1 sudo[48345]: pam_unix(sudo:session): session closed for user root
Dec 08 09:40:35 compute-1 python3.9[48499]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 08 09:40:36 compute-1 sshd-session[48348]: Received disconnect from 95.128.196.223 port 36676:11: Bye Bye [preauth]
Dec 08 09:40:36 compute-1 sshd-session[48348]: Disconnected from authenticating user root 95.128.196.223 port 36676 [preauth]
Dec 08 09:40:36 compute-1 sudo[48651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmchysixmdalvbrknbksoqcilbmrvnzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186835.958512-417-131205734772299/AnsiballZ_dnf.py'
Dec 08 09:40:36 compute-1 sudo[48651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:40:36 compute-1 python3.9[48653]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 08 09:40:38 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 08 09:40:38 compute-1 systemd[1]: Starting man-db-cache-update.service...
Dec 08 09:40:38 compute-1 systemd[1]: Reloading.
Dec 08 09:40:38 compute-1 systemd-sysv-generator[48698]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 08 09:40:38 compute-1 systemd-rc-local-generator[48695]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 08 09:40:38 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 08 09:40:38 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 08 09:40:38 compute-1 systemd[1]: Finished man-db-cache-update.service.
Dec 08 09:40:38 compute-1 systemd[1]: run-r7997cd7a8c1b4c61b8d6b20a5f8ccf47.service: Deactivated successfully.
Dec 08 09:40:38 compute-1 sudo[48651]: pam_unix(sudo:session): session closed for user root
Dec 08 09:40:39 compute-1 sudo[48969]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkyjmellyyepywvxjxwkjwerjmzgriuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186839.3112202-441-235941013404206/AnsiballZ_systemd.py'
Dec 08 09:40:39 compute-1 sudo[48969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:40:40 compute-1 python3.9[48971]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 08 09:40:40 compute-1 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec 08 09:40:40 compute-1 systemd[1]: Stopped Network Manager Wait Online.
Dec 08 09:40:40 compute-1 systemd[1]: Stopping Network Manager Wait Online...
Dec 08 09:40:40 compute-1 NetworkManager[7192]: <info>  [1765186840.0855] caught SIGTERM, shutting down normally.
Dec 08 09:40:40 compute-1 systemd[1]: Stopping Network Manager...
Dec 08 09:40:40 compute-1 NetworkManager[7192]: <info>  [1765186840.0884] dhcp4 (eth0): canceled DHCP transaction
Dec 08 09:40:40 compute-1 NetworkManager[7192]: <info>  [1765186840.0884] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 08 09:40:40 compute-1 NetworkManager[7192]: <info>  [1765186840.0884] dhcp4 (eth0): state changed no lease
Dec 08 09:40:40 compute-1 NetworkManager[7192]: <info>  [1765186840.0889] manager: NetworkManager state is now CONNECTED_SITE
Dec 08 09:40:40 compute-1 NetworkManager[7192]: <info>  [1765186840.1000] exiting (success)
Dec 08 09:40:40 compute-1 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 08 09:40:40 compute-1 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 08 09:40:40 compute-1 systemd[1]: NetworkManager.service: Deactivated successfully.
Dec 08 09:40:40 compute-1 systemd[1]: Stopped Network Manager.
Dec 08 09:40:40 compute-1 systemd[1]: NetworkManager.service: Consumed 12.728s CPU time, 4.1M memory peak, read 0B from disk, written 13.5K to disk.
Dec 08 09:40:40 compute-1 systemd[1]: Starting Network Manager...
Dec 08 09:40:40 compute-1 NetworkManager[48984]: <info>  [1765186840.1766] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:f484d2fb-c260-4eb2-82b3-d1ac68e69214)
Dec 08 09:40:40 compute-1 NetworkManager[48984]: <info>  [1765186840.1769] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec 08 09:40:40 compute-1 NetworkManager[48984]: <info>  [1765186840.1839] manager[0x5588c4c9c090]: monitoring kernel firmware directory '/lib/firmware'.
Dec 08 09:40:40 compute-1 systemd[1]: Starting Hostname Service...
Dec 08 09:40:40 compute-1 systemd[1]: Started Hostname Service.
Dec 08 09:40:40 compute-1 NetworkManager[48984]: <info>  [1765186840.2677] hostname: hostname: using hostnamed
Dec 08 09:40:40 compute-1 NetworkManager[48984]: <info>  [1765186840.2681] hostname: static hostname changed from (none) to "compute-1"
Dec 08 09:40:40 compute-1 NetworkManager[48984]: <info>  [1765186840.2688] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec 08 09:40:40 compute-1 NetworkManager[48984]: <info>  [1765186840.2695] manager[0x5588c4c9c090]: rfkill: Wi-Fi hardware radio set enabled
Dec 08 09:40:40 compute-1 NetworkManager[48984]: <info>  [1765186840.2696] manager[0x5588c4c9c090]: rfkill: WWAN hardware radio set enabled
Dec 08 09:40:40 compute-1 NetworkManager[48984]: <info>  [1765186840.2731] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Dec 08 09:40:40 compute-1 NetworkManager[48984]: <info>  [1765186840.2745] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Dec 08 09:40:40 compute-1 NetworkManager[48984]: <info>  [1765186840.2746] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 08 09:40:40 compute-1 NetworkManager[48984]: <info>  [1765186840.2747] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 08 09:40:40 compute-1 NetworkManager[48984]: <info>  [1765186840.2747] manager: Networking is enabled by state file
Dec 08 09:40:40 compute-1 NetworkManager[48984]: <info>  [1765186840.2750] settings: Loaded settings plugin: keyfile (internal)
Dec 08 09:40:40 compute-1 NetworkManager[48984]: <info>  [1765186840.2755] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec 08 09:40:40 compute-1 NetworkManager[48984]: <info>  [1765186840.2786] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec 08 09:40:40 compute-1 NetworkManager[48984]: <info>  [1765186840.2796] dhcp: init: Using DHCP client 'internal'
Dec 08 09:40:40 compute-1 NetworkManager[48984]: <info>  [1765186840.2799] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec 08 09:40:40 compute-1 NetworkManager[48984]: <info>  [1765186840.2804] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 08 09:40:40 compute-1 NetworkManager[48984]: <info>  [1765186840.2811] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec 08 09:40:40 compute-1 NetworkManager[48984]: <info>  [1765186840.2820] device (lo): Activation: starting connection 'lo' (ce717743-79fb-4648-bac2-02cc511629a9)
Dec 08 09:40:40 compute-1 NetworkManager[48984]: <info>  [1765186840.2827] device (eth0): carrier: link connected
Dec 08 09:40:40 compute-1 NetworkManager[48984]: <info>  [1765186840.2832] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec 08 09:40:40 compute-1 NetworkManager[48984]: <info>  [1765186840.2836] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec 08 09:40:40 compute-1 NetworkManager[48984]: <info>  [1765186840.2836] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 08 09:40:40 compute-1 NetworkManager[48984]: <info>  [1765186840.2842] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 08 09:40:40 compute-1 NetworkManager[48984]: <info>  [1765186840.2848] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 08 09:40:40 compute-1 NetworkManager[48984]: <info>  [1765186840.2855] device (eth1): carrier: link connected
Dec 08 09:40:40 compute-1 NetworkManager[48984]: <info>  [1765186840.2860] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec 08 09:40:40 compute-1 NetworkManager[48984]: <info>  [1765186840.2868] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (ee83eef6-1e86-5eac-a45d-fd4a9023a665) (indicated)
Dec 08 09:40:40 compute-1 NetworkManager[48984]: <info>  [1765186840.2870] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 08 09:40:40 compute-1 NetworkManager[48984]: <info>  [1765186840.2876] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 08 09:40:40 compute-1 NetworkManager[48984]: <info>  [1765186840.2884] device (eth1): Activation: starting connection 'ci-private-network' (ee83eef6-1e86-5eac-a45d-fd4a9023a665)
Dec 08 09:40:40 compute-1 NetworkManager[48984]: <info>  [1765186840.2892] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 08 09:40:40 compute-1 systemd[1]: Started Network Manager.
Dec 08 09:40:40 compute-1 NetworkManager[48984]: <info>  [1765186840.2903] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec 08 09:40:40 compute-1 NetworkManager[48984]: <info>  [1765186840.2906] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec 08 09:40:40 compute-1 NetworkManager[48984]: <info>  [1765186840.2908] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec 08 09:40:40 compute-1 NetworkManager[48984]: <info>  [1765186840.2911] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 08 09:40:40 compute-1 NetworkManager[48984]: <info>  [1765186840.2914] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 08 09:40:40 compute-1 NetworkManager[48984]: <info>  [1765186840.2917] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 08 09:40:40 compute-1 NetworkManager[48984]: <info>  [1765186840.2920] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 08 09:40:40 compute-1 NetworkManager[48984]: <info>  [1765186840.2924] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec 08 09:40:40 compute-1 NetworkManager[48984]: <info>  [1765186840.2932] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 08 09:40:40 compute-1 NetworkManager[48984]: <info>  [1765186840.2936] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 08 09:40:40 compute-1 NetworkManager[48984]: <info>  [1765186840.2948] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 08 09:40:40 compute-1 NetworkManager[48984]: <info>  [1765186840.2968] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 08 09:40:40 compute-1 NetworkManager[48984]: <info>  [1765186840.2977] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec 08 09:40:40 compute-1 NetworkManager[48984]: <info>  [1765186840.2979] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec 08 09:40:40 compute-1 NetworkManager[48984]: <info>  [1765186840.2985] device (lo): Activation: successful, device activated.
Dec 08 09:40:40 compute-1 NetworkManager[48984]: <info>  [1765186840.3243] dhcp4 (eth0): state changed new lease, address=38.102.83.181
Dec 08 09:40:40 compute-1 NetworkManager[48984]: <info>  [1765186840.3253] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec 08 09:40:40 compute-1 systemd[1]: Starting Network Manager Wait Online...
Dec 08 09:40:40 compute-1 NetworkManager[48984]: <info>  [1765186840.3341] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 08 09:40:40 compute-1 NetworkManager[48984]: <info>  [1765186840.3351] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 08 09:40:40 compute-1 NetworkManager[48984]: <info>  [1765186840.3354] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 08 09:40:40 compute-1 NetworkManager[48984]: <info>  [1765186840.3358] manager: NetworkManager state is now CONNECTED_LOCAL
Dec 08 09:40:40 compute-1 NetworkManager[48984]: <info>  [1765186840.3364] device (eth1): Activation: successful, device activated.
Dec 08 09:40:40 compute-1 NetworkManager[48984]: <info>  [1765186840.3374] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 08 09:40:40 compute-1 NetworkManager[48984]: <info>  [1765186840.3376] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 08 09:40:40 compute-1 NetworkManager[48984]: <info>  [1765186840.3380] manager: NetworkManager state is now CONNECTED_SITE
Dec 08 09:40:40 compute-1 NetworkManager[48984]: <info>  [1765186840.3384] device (eth0): Activation: successful, device activated.
Dec 08 09:40:40 compute-1 NetworkManager[48984]: <info>  [1765186840.3392] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec 08 09:40:40 compute-1 NetworkManager[48984]: <info>  [1765186840.3410] manager: startup complete
Dec 08 09:40:40 compute-1 systemd[1]: Finished Network Manager Wait Online.
Dec 08 09:40:40 compute-1 sudo[48969]: pam_unix(sudo:session): session closed for user root
Dec 08 09:40:40 compute-1 sudo[49195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwcbampqfwsyqjnoiolydsauhevudgnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186840.5409226-465-63457097194177/AnsiballZ_dnf.py'
Dec 08 09:40:40 compute-1 sudo[49195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:40:41 compute-1 python3.9[49197]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 08 09:40:41 compute-1 sshd[1006]: Timeout before authentication for connection from 120.48.123.76 to 38.102.83.181, pid = 34294
Dec 08 09:40:45 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 08 09:40:45 compute-1 systemd[1]: Starting man-db-cache-update.service...
Dec 08 09:40:45 compute-1 systemd[1]: Reloading.
Dec 08 09:40:45 compute-1 systemd-rc-local-generator[49251]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 08 09:40:45 compute-1 systemd-sysv-generator[49254]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 08 09:40:45 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 08 09:40:46 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 08 09:40:46 compute-1 systemd[1]: Finished man-db-cache-update.service.
Dec 08 09:40:46 compute-1 systemd[1]: run-r203e6af28a784835a6c11a5c163d0da9.service: Deactivated successfully.
Dec 08 09:40:46 compute-1 sudo[49195]: pam_unix(sudo:session): session closed for user root
Dec 08 09:40:47 compute-1 sudo[49654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oinxxbwcgfgikipugikqofmyvtvszcpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186847.5939226-501-52749578230298/AnsiballZ_stat.py'
Dec 08 09:40:47 compute-1 sudo[49654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:40:48 compute-1 python3.9[49656]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 08 09:40:48 compute-1 sudo[49654]: pam_unix(sudo:session): session closed for user root
Dec 08 09:40:48 compute-1 sudo[49806]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbgatqrhnyyvfxzaxfnwwzskdperfcvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186848.334481-528-115723135740175/AnsiballZ_ini_file.py'
Dec 08 09:40:48 compute-1 sudo[49806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:40:48 compute-1 python3.9[49808]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:40:48 compute-1 sudo[49806]: pam_unix(sudo:session): session closed for user root
Dec 08 09:40:49 compute-1 sudo[49960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emclsoaqlevsyctemqtmqjzmjwmmuqzp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186849.3043344-558-107875916023129/AnsiballZ_ini_file.py'
Dec 08 09:40:49 compute-1 sudo[49960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:40:49 compute-1 python3.9[49962]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:40:49 compute-1 sudo[49960]: pam_unix(sudo:session): session closed for user root
Dec 08 09:40:50 compute-1 sudo[50112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hinhsguqyxxfelfyiwanezxqajoegrlt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186849.9338217-558-276320016269388/AnsiballZ_ini_file.py'
Dec 08 09:40:50 compute-1 sudo[50112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:40:50 compute-1 python3.9[50114]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:40:50 compute-1 sudo[50112]: pam_unix(sudo:session): session closed for user root
Dec 08 09:40:50 compute-1 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 08 09:40:51 compute-1 sudo[50264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhyqfbkccavdwhcwlxiihmgymdrhdjxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186850.6901615-603-107158839139391/AnsiballZ_ini_file.py'
Dec 08 09:40:51 compute-1 sudo[50264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:40:51 compute-1 python3.9[50266]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:40:51 compute-1 sudo[50264]: pam_unix(sudo:session): session closed for user root
Dec 08 09:40:51 compute-1 sudo[50416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yncvkdsgfmuojrittacbegfwyhfjdclb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186851.4770937-603-97918767764859/AnsiballZ_ini_file.py'
Dec 08 09:40:51 compute-1 sudo[50416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:40:51 compute-1 python3.9[50418]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:40:51 compute-1 sudo[50416]: pam_unix(sudo:session): session closed for user root
Dec 08 09:40:52 compute-1 sudo[50568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grijiwkupkaavyhpgayezkasfomuilhq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186852.2269332-648-274721215164038/AnsiballZ_stat.py'
Dec 08 09:40:52 compute-1 sudo[50568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:40:52 compute-1 python3.9[50570]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 08 09:40:52 compute-1 sudo[50568]: pam_unix(sudo:session): session closed for user root
Dec 08 09:40:53 compute-1 sudo[50691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwalidmxzvpdgcbvfsfergrtxcruvsbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186852.2269332-648-274721215164038/AnsiballZ_copy.py'
Dec 08 09:40:53 compute-1 sudo[50691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:40:53 compute-1 python3.9[50693]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1765186852.2269332-648-274721215164038/.source _original_basename=.p2yy06c8 follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:40:53 compute-1 sudo[50691]: pam_unix(sudo:session): session closed for user root
Dec 08 09:40:53 compute-1 sudo[50843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-veapsahwrsbvksonmarxafzwhkjgslik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186853.607812-693-72902654929234/AnsiballZ_file.py'
Dec 08 09:40:53 compute-1 sudo[50843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:40:54 compute-1 python3.9[50845]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:40:54 compute-1 sudo[50843]: pam_unix(sudo:session): session closed for user root
Dec 08 09:40:54 compute-1 sudo[50995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itsaesyrcxhcxdiobouzmfuwqgjvdktt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186854.2901225-717-143415205146445/AnsiballZ_edpm_os_net_config_mappings.py'
Dec 08 09:40:54 compute-1 sudo[50995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:40:54 compute-1 python3.9[50997]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Dec 08 09:40:54 compute-1 sudo[50995]: pam_unix(sudo:session): session closed for user root
Dec 08 09:40:55 compute-1 sudo[51147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uuoyrgjfiwvniuncuogaabtaehdfvasx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186855.2001164-744-226856378544904/AnsiballZ_file.py'
Dec 08 09:40:55 compute-1 sudo[51147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:40:55 compute-1 python3.9[51149]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:40:55 compute-1 sudo[51147]: pam_unix(sudo:session): session closed for user root
Dec 08 09:40:56 compute-1 sudo[51299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcuoldwipqaxzyhoihlliqexxistivde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186856.2048664-774-25379377881466/AnsiballZ_stat.py'
Dec 08 09:40:56 compute-1 sudo[51299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:40:56 compute-1 sudo[51299]: pam_unix(sudo:session): session closed for user root
Dec 08 09:40:57 compute-1 sudo[51422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikxqhpvadekgmhssusdxvdqnwxwonchz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186856.2048664-774-25379377881466/AnsiballZ_copy.py'
Dec 08 09:40:57 compute-1 sudo[51422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:40:57 compute-1 sudo[51422]: pam_unix(sudo:session): session closed for user root
Dec 08 09:40:57 compute-1 sudo[51574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ziptczffyxfpcztydkmvnjmrbiaduwsm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186857.5135705-819-130316733100942/AnsiballZ_slurp.py'
Dec 08 09:40:57 compute-1 sudo[51574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:40:58 compute-1 python3.9[51576]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Dec 08 09:40:58 compute-1 sudo[51574]: pam_unix(sudo:session): session closed for user root
Dec 08 09:40:59 compute-1 sudo[51749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwfxyazxbebeaieervnpeprjfxgkhtxv ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186858.4653404-846-37219973166651/async_wrapper.py j76364423395 300 /home/zuul/.ansible/tmp/ansible-tmp-1765186858.4653404-846-37219973166651/AnsiballZ_edpm_os_net_config.py _'
Dec 08 09:40:59 compute-1 sudo[51749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:40:59 compute-1 ansible-async_wrapper.py[51751]: Invoked with j76364423395 300 /home/zuul/.ansible/tmp/ansible-tmp-1765186858.4653404-846-37219973166651/AnsiballZ_edpm_os_net_config.py _
Dec 08 09:40:59 compute-1 ansible-async_wrapper.py[51754]: Starting module and watcher
Dec 08 09:40:59 compute-1 ansible-async_wrapper.py[51754]: Start watching 51755 (300)
Dec 08 09:40:59 compute-1 ansible-async_wrapper.py[51755]: Start module (51755)
Dec 08 09:40:59 compute-1 ansible-async_wrapper.py[51751]: Return async_wrapper task started.
Dec 08 09:40:59 compute-1 sudo[51749]: pam_unix(sudo:session): session closed for user root
Dec 08 09:40:59 compute-1 python3.9[51756]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Dec 08 09:41:00 compute-1 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Dec 08 09:41:00 compute-1 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Dec 08 09:41:00 compute-1 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Dec 08 09:41:00 compute-1 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Dec 08 09:41:00 compute-1 kernel: cfg80211: failed to load regulatory.db
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.2991] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51757 uid=0 result="success"
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3006] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51757 uid=0 result="success"
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3494] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3496] audit: op="connection-add" uuid="8f26e55f-8903-4917-963b-1377db74aed8" name="br-ex-br" pid=51757 uid=0 result="success"
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3510] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3511] audit: op="connection-add" uuid="c0cb4ebb-75ab-4441-b70c-996d43103c29" name="br-ex-port" pid=51757 uid=0 result="success"
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3524] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3525] audit: op="connection-add" uuid="d1f0ee62-67ac-42c9-b1ed-cfa9349dc548" name="eth1-port" pid=51757 uid=0 result="success"
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3535] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3537] audit: op="connection-add" uuid="fc9fd669-5164-4bbc-bbbb-5c3134b203c9" name="vlan20-port" pid=51757 uid=0 result="success"
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3547] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3548] audit: op="connection-add" uuid="54fce837-6699-4d7c-8817-273c982b99fa" name="vlan21-port" pid=51757 uid=0 result="success"
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3558] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3560] audit: op="connection-add" uuid="e31fb62e-6bf8-4e72-991a-b89b967845dd" name="vlan22-port" pid=51757 uid=0 result="success"
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3570] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3571] audit: op="connection-add" uuid="3a607e17-1289-4b38-b609-869e855c755f" name="vlan23-port" pid=51757 uid=0 result="success"
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3590] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="802-3-ethernet.mtu,ipv4.dhcp-timeout,ipv4.dhcp-client-id,connection.timestamp,connection.autoconnect-priority,ipv6.dhcp-timeout,ipv6.addr-gen-mode,ipv6.method" pid=51757 uid=0 result="success"
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3603] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3605] audit: op="connection-add" uuid="957b1580-8c4a-45b4-8d12-f13b0462829d" name="br-ex-if" pid=51757 uid=0 result="success"
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3650] audit: op="connection-update" uuid="ee83eef6-1e86-5eac-a45d-fd4a9023a665" name="ci-private-network" args="ovs-interface.type,ipv4.routing-rules,ipv4.never-default,ipv4.routes,ipv4.dns,ipv4.addresses,ipv4.method,ovs-external-ids.data,connection.controller,connection.slave-type,connection.master,connection.port-type,connection.timestamp,ipv6.dns,ipv6.routes,ipv6.routing-rules,ipv6.addr-gen-mode,ipv6.addresses,ipv6.method" pid=51757 uid=0 result="success"
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3667] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3668] audit: op="connection-add" uuid="903b28da-04f0-421c-94bf-0917b8b24f1b" name="vlan20-if" pid=51757 uid=0 result="success"
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3682] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3683] audit: op="connection-add" uuid="2b69ee3b-8061-41c6-9f21-042b53b1cc74" name="vlan21-if" pid=51757 uid=0 result="success"
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3699] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3700] audit: op="connection-add" uuid="12269057-77c6-465f-a758-d36829eeb36f" name="vlan22-if" pid=51757 uid=0 result="success"
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3714] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3715] audit: op="connection-add" uuid="7e5731a9-efdd-4040-8697-051ebdcf34bb" name="vlan23-if" pid=51757 uid=0 result="success"
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3727] audit: op="connection-delete" uuid="703c6305-83ed-3018-8231-3481d3d9a233" name="Wired connection 1" pid=51757 uid=0 result="success"
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3743] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3754] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3760] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (8f26e55f-8903-4917-963b-1377db74aed8)
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3760] audit: op="connection-activate" uuid="8f26e55f-8903-4917-963b-1377db74aed8" name="br-ex-br" pid=51757 uid=0 result="success"
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3762] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3768] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3771] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (c0cb4ebb-75ab-4441-b70c-996d43103c29)
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3772] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3776] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3780] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (d1f0ee62-67ac-42c9-b1ed-cfa9349dc548)
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3781] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3785] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3788] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (fc9fd669-5164-4bbc-bbbb-5c3134b203c9)
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3790] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3794] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3797] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (54fce837-6699-4d7c-8817-273c982b99fa)
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3799] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3804] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3808] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (e31fb62e-6bf8-4e72-991a-b89b967845dd)
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3809] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3813] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3816] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (3a607e17-1289-4b38-b609-869e855c755f)
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3816] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3818] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3820] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3827] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3831] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3835] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (957b1580-8c4a-45b4-8d12-f13b0462829d)
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3836] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3839] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3840] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3842] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3843] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3853] device (eth1): disconnecting for new activation request.
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3854] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3857] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3858] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3860] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3862] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3866] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3870] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (903b28da-04f0-421c-94bf-0917b8b24f1b)
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3872] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3875] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3876] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3877] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3880] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3885] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3889] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (2b69ee3b-8061-41c6-9f21-042b53b1cc74)
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3890] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3892] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3894] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3895] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3898] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3902] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3906] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (12269057-77c6-465f-a758-d36829eeb36f)
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3907] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3910] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3912] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3913] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3916] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3920] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3925] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (7e5731a9-efdd-4040-8697-051ebdcf34bb)
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3926] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3929] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3931] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3932] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3933] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3944] audit: op="device-reapply" interface="eth0" ifindex=2 args="802-3-ethernet.mtu,ipv4.dhcp-timeout,ipv4.dhcp-client-id,connection.autoconnect-priority,ipv6.addr-gen-mode,ipv6.method" pid=51757 uid=0 result="success"
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3946] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3950] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3951] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3957] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3961] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3964] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3967] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3968] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3973] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3977] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3980] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3982] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3987] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 kernel: ovs-system: entered promiscuous mode
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3990] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3993] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3994] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.3999] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.4003] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.4005] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.4007] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 systemd-udevd[51763]: Network interface NamePolicy= disabled on kernel command line.
Dec 08 09:41:01 compute-1 kernel: Timeout policy base is empty
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.4012] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.4015] dhcp4 (eth0): canceled DHCP transaction
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.4016] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.4016] dhcp4 (eth0): state changed no lease
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.4018] dhcp4 (eth0): activation: beginning transaction (no timeout)
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.4030] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.4033] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51757 uid=0 result="fail" reason="Device is not activated"
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.4039] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.4070] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.4074] dhcp4 (eth0): state changed new lease, address=38.102.83.181
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.4077] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Dec 08 09:41:01 compute-1 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.4149] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.4156] device (eth1): disconnecting for new activation request.
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.4157] audit: op="connection-activate" uuid="ee83eef6-1e86-5eac-a45d-fd4a9023a665" name="ci-private-network" pid=51757 uid=0 result="success"
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.4157] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.4236] device (eth1): Activation: starting connection 'ci-private-network' (ee83eef6-1e86-5eac-a45d-fd4a9023a665)
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.4240] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.4252] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.4254] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.4261] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.4264] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.4267] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.4268] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.4269] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.4270] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.4271] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.4272] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.4273] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51757 uid=0 result="success"
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.4274] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.4279] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.4281] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.4284] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.4287] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.4289] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.4292] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.4294] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.4296] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.4298] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.4301] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.4303] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.4305] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Dec 08 09:41:01 compute-1 kernel: br-ex: entered promiscuous mode
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.4309] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.4312] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Dec 08 09:41:01 compute-1 kernel: vlan22: entered promiscuous mode
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.4395] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.4397] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 systemd-udevd[51762]: Network interface NamePolicy= disabled on kernel command line.
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.4401] device (eth1): Activation: successful, device activated.
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.4410] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.4418] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.4442] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.4443] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.4447] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 08 09:41:01 compute-1 kernel: vlan20: entered promiscuous mode
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.4520] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.4527] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 kernel: vlan21: entered promiscuous mode
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.4542] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.4543] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.4547] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.4581] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.4589] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 kernel: vlan23: entered promiscuous mode
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.4623] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.4626] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.4630] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.4637] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.4647] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.4677] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.4678] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.4683] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.4711] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.4721] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.4739] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.4740] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 08 09:41:01 compute-1 NetworkManager[48984]: <info>  [1765186861.4744] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 08 09:41:02 compute-1 NetworkManager[48984]: <info>  [1765186862.5827] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51757 uid=0 result="success"
Dec 08 09:41:02 compute-1 NetworkManager[48984]: <info>  [1765186862.7614] checkpoint[0x5588c4c72950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Dec 08 09:41:02 compute-1 NetworkManager[48984]: <info>  [1765186862.7616] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51757 uid=0 result="success"
Dec 08 09:41:02 compute-1 sudo[52113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akwhakkybqlqhlxbodulllusxryqrryb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186862.4175198-846-190465146566565/AnsiballZ_async_status.py'
Dec 08 09:41:02 compute-1 sudo[52113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:41:03 compute-1 python3.9[52115]: ansible-ansible.legacy.async_status Invoked with jid=j76364423395.51751 mode=status _async_dir=/root/.ansible_async
Dec 08 09:41:03 compute-1 sudo[52113]: pam_unix(sudo:session): session closed for user root
Dec 08 09:41:03 compute-1 NetworkManager[48984]: <info>  [1765186863.2281] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51757 uid=0 result="success"
Dec 08 09:41:03 compute-1 NetworkManager[48984]: <info>  [1765186863.2297] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51757 uid=0 result="success"
Dec 08 09:41:03 compute-1 NetworkManager[48984]: <info>  [1765186863.4769] audit: op="networking-control" arg="global-dns-configuration" pid=51757 uid=0 result="success"
Dec 08 09:41:03 compute-1 NetworkManager[48984]: <info>  [1765186863.4804] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Dec 08 09:41:03 compute-1 NetworkManager[48984]: <info>  [1765186863.4835] audit: op="networking-control" arg="global-dns-configuration" pid=51757 uid=0 result="success"
Dec 08 09:41:03 compute-1 NetworkManager[48984]: <info>  [1765186863.5362] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51757 uid=0 result="success"
Dec 08 09:41:03 compute-1 NetworkManager[48984]: <info>  [1765186863.7068] checkpoint[0x5588c4c72a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Dec 08 09:41:03 compute-1 NetworkManager[48984]: <info>  [1765186863.7072] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51757 uid=0 result="success"
Dec 08 09:41:03 compute-1 ansible-async_wrapper.py[51755]: Module complete (51755)
Dec 08 09:41:04 compute-1 ansible-async_wrapper.py[51754]: Done in kid B.
Dec 08 09:41:06 compute-1 sudo[52218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbgyglircjhunhpdtxrogqtoffyojqtw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186862.4175198-846-190465146566565/AnsiballZ_async_status.py'
Dec 08 09:41:06 compute-1 sudo[52218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:41:06 compute-1 python3.9[52220]: ansible-ansible.legacy.async_status Invoked with jid=j76364423395.51751 mode=status _async_dir=/root/.ansible_async
Dec 08 09:41:06 compute-1 sudo[52218]: pam_unix(sudo:session): session closed for user root
Dec 08 09:41:06 compute-1 sudo[52318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yknagyygmfmekefwcoaqionicrysqqwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186862.4175198-846-190465146566565/AnsiballZ_async_status.py'
Dec 08 09:41:06 compute-1 sudo[52318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:41:07 compute-1 python3.9[52320]: ansible-ansible.legacy.async_status Invoked with jid=j76364423395.51751 mode=cleanup _async_dir=/root/.ansible_async
Dec 08 09:41:07 compute-1 sudo[52318]: pam_unix(sudo:session): session closed for user root
Dec 08 09:41:07 compute-1 sudo[52470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kacjicqsqysxyzylcijwygczpqebtlrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186867.4098647-927-19208512642067/AnsiballZ_stat.py'
Dec 08 09:41:07 compute-1 sudo[52470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:41:07 compute-1 python3.9[52472]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 08 09:41:07 compute-1 sudo[52470]: pam_unix(sudo:session): session closed for user root
Dec 08 09:41:08 compute-1 sudo[52593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mypyklxgmttmstssrypiihmybbjueigg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186867.4098647-927-19208512642067/AnsiballZ_copy.py'
Dec 08 09:41:08 compute-1 sudo[52593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:41:08 compute-1 python3.9[52595]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765186867.4098647-927-19208512642067/.source.returncode _original_basename=.ovf_ek_f follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:41:08 compute-1 sudo[52593]: pam_unix(sudo:session): session closed for user root
Dec 08 09:41:09 compute-1 sudo[52747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgzqmimeenqrhxlszomibopgimrdnafc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186868.8497248-975-202042954563014/AnsiballZ_stat.py'
Dec 08 09:41:09 compute-1 sudo[52747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:41:09 compute-1 python3.9[52749]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 08 09:41:09 compute-1 sudo[52747]: pam_unix(sudo:session): session closed for user root
Dec 08 09:41:09 compute-1 sudo[52870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umavydfcclqotyqeffbnyvybfrfxrpaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186868.8497248-975-202042954563014/AnsiballZ_copy.py'
Dec 08 09:41:09 compute-1 sudo[52870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:41:09 compute-1 python3.9[52872]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765186868.8497248-975-202042954563014/.source.cfg _original_basename=.u1quy_3h follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:41:09 compute-1 sudo[52870]: pam_unix(sudo:session): session closed for user root
Dec 08 09:41:10 compute-1 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 08 09:41:10 compute-1 sshd-session[52619]: Received disconnect from 103.191.92.236 port 35156:11: Bye Bye [preauth]
Dec 08 09:41:10 compute-1 sshd-session[52619]: Disconnected from authenticating user root 103.191.92.236 port 35156 [preauth]
Dec 08 09:41:10 compute-1 sudo[53026]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpxvlgqogvcuaprhdrnreyzbeoplyyav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186870.1791892-1020-37270444184232/AnsiballZ_systemd.py'
Dec 08 09:41:10 compute-1 sudo[53026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:41:10 compute-1 python3.9[53028]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 08 09:41:10 compute-1 systemd[1]: Reloading Network Manager...
Dec 08 09:41:10 compute-1 NetworkManager[48984]: <info>  [1765186870.9238] audit: op="reload" arg="0" pid=53032 uid=0 result="success"
Dec 08 09:41:10 compute-1 NetworkManager[48984]: <info>  [1765186870.9247] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Dec 08 09:41:10 compute-1 systemd[1]: Reloaded Network Manager.
Dec 08 09:41:10 compute-1 sudo[53026]: pam_unix(sudo:session): session closed for user root
Dec 08 09:41:11 compute-1 sshd-session[44985]: Connection closed by 192.168.122.30 port 57058
Dec 08 09:41:11 compute-1 sshd-session[44982]: pam_unix(sshd:session): session closed for user zuul
Dec 08 09:41:11 compute-1 systemd[1]: session-11.scope: Deactivated successfully.
Dec 08 09:41:11 compute-1 systemd[1]: session-11.scope: Consumed 51.449s CPU time.
Dec 08 09:41:11 compute-1 systemd-logind[795]: Session 11 logged out. Waiting for processes to exit.
Dec 08 09:41:11 compute-1 systemd-logind[795]: Removed session 11.
Dec 08 09:41:16 compute-1 sshd-session[53062]: Accepted publickey for zuul from 192.168.122.30 port 49692 ssh2: ECDSA SHA256:OYiJ0qK9HlckOLsAMneS1eCh6uM9OgfWStqM+CYb3U8
Dec 08 09:41:16 compute-1 systemd-logind[795]: New session 12 of user zuul.
Dec 08 09:41:16 compute-1 systemd[1]: Started Session 12 of User zuul.
Dec 08 09:41:16 compute-1 sshd-session[53062]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 08 09:41:17 compute-1 python3.9[53216]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 08 09:41:18 compute-1 python3.9[53370]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 08 09:41:19 compute-1 python3.9[53563]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 08 09:41:20 compute-1 sshd-session[53065]: Connection closed by 192.168.122.30 port 49692
Dec 08 09:41:20 compute-1 sshd-session[53062]: pam_unix(sshd:session): session closed for user zuul
Dec 08 09:41:20 compute-1 systemd[1]: session-12.scope: Deactivated successfully.
Dec 08 09:41:20 compute-1 systemd[1]: session-12.scope: Consumed 2.524s CPU time.
Dec 08 09:41:20 compute-1 systemd-logind[795]: Session 12 logged out. Waiting for processes to exit.
Dec 08 09:41:20 compute-1 systemd-logind[795]: Removed session 12.
Dec 08 09:41:20 compute-1 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 08 09:41:25 compute-1 sshd-session[53592]: Accepted publickey for zuul from 192.168.122.30 port 33912 ssh2: ECDSA SHA256:OYiJ0qK9HlckOLsAMneS1eCh6uM9OgfWStqM+CYb3U8
Dec 08 09:41:25 compute-1 systemd-logind[795]: New session 13 of user zuul.
Dec 08 09:41:25 compute-1 systemd[1]: Started Session 13 of User zuul.
Dec 08 09:41:25 compute-1 sshd-session[53592]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 08 09:41:26 compute-1 python3.9[53746]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 08 09:41:27 compute-1 python3.9[53900]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 08 09:41:28 compute-1 sudo[54054]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klnpkrqvwofifambqgutpuqdgagmfxhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186887.829231-81-249582061772105/AnsiballZ_setup.py'
Dec 08 09:41:28 compute-1 sudo[54054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:41:28 compute-1 python3.9[54056]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 08 09:41:28 compute-1 sudo[54054]: pam_unix(sudo:session): session closed for user root
Dec 08 09:41:29 compute-1 sudo[54139]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akjpgqpnocwunsmfneyvpuhfkixfsjbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186887.829231-81-249582061772105/AnsiballZ_dnf.py'
Dec 08 09:41:29 compute-1 sudo[54139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:41:29 compute-1 python3.9[54141]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 08 09:41:30 compute-1 sudo[54139]: pam_unix(sudo:session): session closed for user root
Dec 08 09:41:31 compute-1 sudo[54292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-niysedghuvewziaikjjrrvvotykninsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186890.8356705-117-274868157639621/AnsiballZ_setup.py'
Dec 08 09:41:31 compute-1 sudo[54292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:41:31 compute-1 python3.9[54294]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 08 09:41:32 compute-1 sudo[54292]: pam_unix(sudo:session): session closed for user root
Dec 08 09:41:33 compute-1 sudo[54488]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psnkozcbfxxksnyihxaoglktbsfcqdqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186893.1249459-150-127070926342476/AnsiballZ_file.py'
Dec 08 09:41:33 compute-1 sudo[54488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:41:33 compute-1 python3.9[54490]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:41:33 compute-1 sudo[54488]: pam_unix(sudo:session): session closed for user root
Dec 08 09:41:34 compute-1 sudo[54642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cskdsoskjrzlzntlrqtmddkcqeqpyrgb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186894.1835065-174-82342430281458/AnsiballZ_command.py'
Dec 08 09:41:34 compute-1 sudo[54642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:41:34 compute-1 sshd-session[54522]: Received disconnect from 79.32.212.213 port 37392:11: Bye Bye [preauth]
Dec 08 09:41:34 compute-1 sshd-session[54522]: Disconnected from authenticating user root 79.32.212.213 port 37392 [preauth]
Dec 08 09:41:34 compute-1 python3.9[54644]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 08 09:41:34 compute-1 systemd[1]: var-lib-containers-storage-overlay-compat3093762559-merged.mount: Deactivated successfully.
Dec 08 09:41:34 compute-1 podman[54645]: 2025-12-08 09:41:34.92011856 +0000 UTC m=+0.068211822 system refresh
Dec 08 09:41:34 compute-1 sudo[54642]: pam_unix(sudo:session): session closed for user root
Dec 08 09:41:35 compute-1 sudo[54805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qoisvdpguzojruaatcklxmscrfgoeamx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186895.180054-198-21423308965072/AnsiballZ_stat.py'
Dec 08 09:41:35 compute-1 sudo[54805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:41:35 compute-1 python3.9[54807]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 08 09:41:35 compute-1 sudo[54805]: pam_unix(sudo:session): session closed for user root
Dec 08 09:41:35 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 08 09:41:36 compute-1 sudo[54928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uikqfqedvqzdvuntkchjyghhpznfqive ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186895.180054-198-21423308965072/AnsiballZ_copy.py'
Dec 08 09:41:36 compute-1 sudo[54928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:41:36 compute-1 python3.9[54930]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765186895.180054-198-21423308965072/.source.json follow=False _original_basename=podman_network_config.j2 checksum=cf740c21b7f0278b62991fb710e91c170c93770c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:41:36 compute-1 sudo[54928]: pam_unix(sudo:session): session closed for user root
Dec 08 09:41:37 compute-1 sudo[55080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sohotymlxqdronthdjlrtihjbwvtmtna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186896.6835582-243-40917750685689/AnsiballZ_stat.py'
Dec 08 09:41:37 compute-1 sudo[55080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:41:37 compute-1 python3.9[55082]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 08 09:41:37 compute-1 sudo[55080]: pam_unix(sudo:session): session closed for user root
Dec 08 09:41:37 compute-1 sshd[1006]: Timeout before authentication for connection from 180.76.105.69 to 38.102.83.181, pid = 41075
Dec 08 09:41:37 compute-1 sudo[55203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqtpgjertdbgpkqgccgmizrghsvcitkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186896.6835582-243-40917750685689/AnsiballZ_copy.py'
Dec 08 09:41:37 compute-1 sudo[55203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:41:37 compute-1 python3.9[55205]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765186896.6835582-243-40917750685689/.source.conf follow=False _original_basename=registries.conf.j2 checksum=c2a85b7389d30a5066b1ae0058c9a8ae1bc25688 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 08 09:41:37 compute-1 sudo[55203]: pam_unix(sudo:session): session closed for user root
Dec 08 09:41:38 compute-1 sudo[55355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urryeinfcwzfzwwhbapmkqvnqzrstlia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186898.156468-291-55931521932447/AnsiballZ_ini_file.py'
Dec 08 09:41:38 compute-1 sudo[55355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:41:38 compute-1 python3.9[55357]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 08 09:41:38 compute-1 sudo[55355]: pam_unix(sudo:session): session closed for user root
Dec 08 09:41:39 compute-1 sudo[55507]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owkutdydqbbmozqhrkzyegfukiwzjmdd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186899.0313077-291-227761220809818/AnsiballZ_ini_file.py'
Dec 08 09:41:39 compute-1 sudo[55507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:41:39 compute-1 python3.9[55509]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 08 09:41:39 compute-1 sudo[55507]: pam_unix(sudo:session): session closed for user root
Dec 08 09:41:39 compute-1 sudo[55659]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xuwarkjguhyumdcrifgywcuerbsqknvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186899.6508825-291-63913106280814/AnsiballZ_ini_file.py'
Dec 08 09:41:39 compute-1 sudo[55659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:41:40 compute-1 python3.9[55661]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 08 09:41:40 compute-1 sudo[55659]: pam_unix(sudo:session): session closed for user root
Dec 08 09:41:40 compute-1 sudo[55811]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrgbqksolyboezzzfhmriwqqzynlspcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186900.324697-291-122317625768910/AnsiballZ_ini_file.py'
Dec 08 09:41:40 compute-1 sudo[55811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:41:40 compute-1 python3.9[55813]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 08 09:41:40 compute-1 sudo[55811]: pam_unix(sudo:session): session closed for user root
Dec 08 09:41:41 compute-1 sudo[55963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxwwtyiywooypjjheshqoeezkuvyjqie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186901.1461124-384-67506841105040/AnsiballZ_dnf.py'
Dec 08 09:41:41 compute-1 sudo[55963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:41:41 compute-1 python3.9[55965]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 08 09:41:42 compute-1 sudo[55963]: pam_unix(sudo:session): session closed for user root
Dec 08 09:41:43 compute-1 sudo[56116]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abzavjqdqspgcbkbvvoyzzczcudltovd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186903.4854681-417-2453591275960/AnsiballZ_setup.py'
Dec 08 09:41:43 compute-1 sudo[56116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:41:44 compute-1 python3.9[56118]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 08 09:41:44 compute-1 sudo[56116]: pam_unix(sudo:session): session closed for user root
Dec 08 09:41:44 compute-1 sudo[56270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwogffbbxjarkthxnzhcqbuojgmnswbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186904.31737-441-130625083190954/AnsiballZ_stat.py'
Dec 08 09:41:44 compute-1 sudo[56270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:41:44 compute-1 python3.9[56272]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 08 09:41:44 compute-1 sudo[56270]: pam_unix(sudo:session): session closed for user root
Dec 08 09:41:45 compute-1 sudo[56422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlagdyhbuaqltftbueruxdcsygdjziab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186905.111646-468-47052168406293/AnsiballZ_stat.py'
Dec 08 09:41:45 compute-1 sudo[56422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:41:45 compute-1 python3.9[56424]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 08 09:41:45 compute-1 sudo[56422]: pam_unix(sudo:session): session closed for user root
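The two stat calls above are host-type probes. A rough shell equivalent — the paths come from the log, the echoed labels are illustrative only:

    test -e /run/ostree-booted && echo "ostree-based host"
    test -x /sbin/transactional-update && echo "transactional-update host"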
Dec 08 09:41:46 compute-1 sudo[56574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uadybxfjsenwxwykzmttxlwojrlryfrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186905.9405255-498-168842050113866/AnsiballZ_command.py'
Dec 08 09:41:46 compute-1 sudo[56574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:41:46 compute-1 python3.9[56576]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 08 09:41:46 compute-1 sudo[56574]: pam_unix(sudo:session): session closed for user root
Dec 08 09:41:47 compute-1 sudo[56727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylhpkmcxdtokummbhvntuthxmiuzwshk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186906.733396-528-205585354921154/AnsiballZ_service_facts.py'
Dec 08 09:41:47 compute-1 sudo[56727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:41:47 compute-1 python3.9[56729]: ansible-service_facts Invoked
Dec 08 09:41:47 compute-1 network[56746]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 08 09:41:47 compute-1 network[56747]: 'network-scripts' will be removed from distribution in near future.
Dec 08 09:41:47 compute-1 network[56748]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 08 09:41:51 compute-1 sudo[56727]: pam_unix(sudo:session): session closed for user root
Dec 08 09:41:52 compute-1 sudo[57031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpfpxqrwsmfjbbqmnckefdkhqguiyisp ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1765186912.121375-573-110652318148054/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1765186912.121375-573-110652318148054/args'
Dec 08 09:41:52 compute-1 sudo[57031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:41:52 compute-1 sudo[57031]: pam_unix(sudo:session): session closed for user root
Dec 08 09:41:53 compute-1 sudo[57198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mebxlqmyauiyydwrzxpgffltyofwmfel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186912.8420181-606-280377186889978/AnsiballZ_dnf.py'
Dec 08 09:41:53 compute-1 sudo[57198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:41:53 compute-1 python3.9[57200]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 08 09:41:54 compute-1 sudo[57198]: pam_unix(sudo:session): session closed for user root
Dec 08 09:41:55 compute-1 sudo[57353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcakutjoaqoalluvaermoheboxywilqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186915.3327603-645-238779139761571/AnsiballZ_package_facts.py'
Dec 08 09:41:55 compute-1 sudo[57353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:41:56 compute-1 python3.9[57355]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Dec 08 09:41:56 compute-1 sudo[57353]: pam_unix(sudo:session): session closed for user root
Dec 08 09:41:56 compute-1 sshd-session[57278]: Received disconnect from 95.128.196.223 port 46798:11: Bye Bye [preauth]
Dec 08 09:41:56 compute-1 sshd-session[57278]: Disconnected from authenticating user root 95.128.196.223 port 46798 [preauth]
Dec 08 09:41:57 compute-1 sudo[57505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urmkkgdqzfmijklycrtykmdeluaikplw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186917.119134-675-33461586556813/AnsiballZ_stat.py'
Dec 08 09:41:57 compute-1 sudo[57505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:41:57 compute-1 python3.9[57507]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 08 09:41:57 compute-1 sudo[57505]: pam_unix(sudo:session): session closed for user root
Dec 08 09:41:57 compute-1 sudo[57630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usxklsjgewnwwciqdsytihnsatuvzamt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186917.119134-675-33461586556813/AnsiballZ_copy.py'
Dec 08 09:41:57 compute-1 sudo[57630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:41:58 compute-1 python3.9[57632]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765186917.119134-675-33461586556813/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:41:58 compute-1 sudo[57630]: pam_unix(sudo:session): session closed for user root
Dec 08 09:41:58 compute-1 sudo[57784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njpycnxtoxvjekapaevciplpkvqxygaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186918.4912903-721-278806283480161/AnsiballZ_stat.py'
Dec 08 09:41:58 compute-1 sudo[57784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:41:59 compute-1 python3.9[57786]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 08 09:41:59 compute-1 sudo[57784]: pam_unix(sudo:session): session closed for user root
Dec 08 09:41:59 compute-1 sudo[57909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uiqnlnykdhuvjgqffswrnwrftngbximm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186918.4912903-721-278806283480161/AnsiballZ_copy.py'
Dec 08 09:41:59 compute-1 sudo[57909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:41:59 compute-1 python3.9[57911]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765186918.4912903-721-278806283480161/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:41:59 compute-1 sudo[57909]: pam_unix(sudo:session): session closed for user root
Dec 08 09:42:01 compute-1 sudo[58063]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-huplyklzeotyfdulpxsdgclhkmwhngbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186920.8980713-784-206189101460051/AnsiballZ_lineinfile.py'
Dec 08 09:42:01 compute-1 sudo[58063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:42:01 compute-1 python3.9[58065]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:42:01 compute-1 sudo[58063]: pam_unix(sudo:session): session closed for user root
Dec 08 09:42:03 compute-1 sudo[58217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgksowonrkbgzdnmxxjjeyteqnqlixqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186922.8011878-829-110514030375775/AnsiballZ_setup.py'
Dec 08 09:42:03 compute-1 sudo[58217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:42:03 compute-1 python3.9[58219]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 08 09:42:03 compute-1 sudo[58217]: pam_unix(sudo:session): session closed for user root
Dec 08 09:42:04 compute-1 sudo[58301]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qajaptpdtpiqnmpzwwdqedrxojcnwpny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186922.8011878-829-110514030375775/AnsiballZ_systemd.py'
Dec 08 09:42:04 compute-1 sudo[58301]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:42:04 compute-1 python3.9[58303]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 08 09:42:04 compute-1 sudo[58301]: pam_unix(sudo:session): session closed for user root
Dec 08 09:42:05 compute-1 sudo[58456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwprhlkhawtwcpknhdhtyonundqbzeym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186925.4522893-877-269126694776382/AnsiballZ_setup.py'
Dec 08 09:42:05 compute-1 sudo[58456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:42:06 compute-1 python3.9[58458]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 08 09:42:06 compute-1 sudo[58456]: pam_unix(sudo:session): session closed for user root
Dec 08 09:42:06 compute-1 sudo[58541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erklmfrssdyajessznjegfkcufywphau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186925.4522893-877-269126694776382/AnsiballZ_systemd.py'
Dec 08 09:42:06 compute-1 sudo[58541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:42:06 compute-1 python3.9[58543]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 08 09:42:06 compute-1 chronyd[789]: chronyd exiting
Dec 08 09:42:06 compute-1 systemd[1]: Stopping NTP client/server...
Dec 08 09:42:06 compute-1 systemd[1]: chronyd.service: Deactivated successfully.
Dec 08 09:42:06 compute-1 systemd[1]: Stopped NTP client/server.
Dec 08 09:42:06 compute-1 systemd[1]: Starting NTP client/server...
Dec 08 09:42:06 compute-1 chronyd[58551]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec 08 09:42:06 compute-1 chronyd[58551]: Frequency -28.440 +/- 0.492 ppm read from /var/lib/chrony/drift
Dec 08 09:42:06 compute-1 chronyd[58551]: Loaded seccomp filter (level 2)
Dec 08 09:42:06 compute-1 systemd[1]: Started NTP client/server.
Dec 08 09:42:07 compute-1 sudo[58541]: pam_unix(sudo:session): session closed for user root
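Taken together, the tasks logged from 09:41:53 to 09:42:06 configure time synchronization. A sketch of the approximate manual sequence — package, paths, modes and template names are taken from the log; the rendered file contents are not shown there, and the lineinfile step is only approximated (it would also rewrite an existing PEERNTP= line, not just append one):

    dnf install -y chrony
    install -o root -m 0644 chrony.conf /etc/chrony.conf              # rendered from chrony.conf.j2
    install -o root -m 0644 chronyd.sysconfig /etc/sysconfig/chronyd  # rendered from chronyd.sysconfig.j2
    grep -q '^PEERNTP=' /etc/sysconfig/network || echo 'PEERNTP=no' >> /etc/sysconfig/network
    systemctl enable --now chronyd
    systemctl restart chronyd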
Dec 08 09:42:07 compute-1 sshd-session[53595]: Connection closed by 192.168.122.30 port 33912
Dec 08 09:42:07 compute-1 sshd-session[53592]: pam_unix(sshd:session): session closed for user zuul
Dec 08 09:42:07 compute-1 systemd[1]: session-13.scope: Deactivated successfully.
Dec 08 09:42:07 compute-1 systemd[1]: session-13.scope: Consumed 26.694s CPU time.
Dec 08 09:42:07 compute-1 systemd-logind[795]: Session 13 logged out. Waiting for processes to exit.
Dec 08 09:42:07 compute-1 systemd-logind[795]: Removed session 13.
Dec 08 09:42:12 compute-1 sshd-session[58577]: Accepted publickey for zuul from 192.168.122.30 port 44428 ssh2: ECDSA SHA256:OYiJ0qK9HlckOLsAMneS1eCh6uM9OgfWStqM+CYb3U8
Dec 08 09:42:12 compute-1 systemd-logind[795]: New session 14 of user zuul.
Dec 08 09:42:12 compute-1 systemd[1]: Started Session 14 of User zuul.
Dec 08 09:42:12 compute-1 sshd-session[58577]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 08 09:42:13 compute-1 sudo[58730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkoxhodgzctghbqrdxztsogqfzzyuqlm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186932.9387038-27-188184051302630/AnsiballZ_file.py'
Dec 08 09:42:13 compute-1 sudo[58730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:42:13 compute-1 python3.9[58732]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:42:13 compute-1 sudo[58730]: pam_unix(sudo:session): session closed for user root
Dec 08 09:42:14 compute-1 sudo[58882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbbhziezdoietkugrlrycsnldbcxmbxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186933.8552709-63-29165453836749/AnsiballZ_stat.py'
Dec 08 09:42:14 compute-1 sudo[58882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:42:14 compute-1 python3.9[58884]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 08 09:42:14 compute-1 sudo[58882]: pam_unix(sudo:session): session closed for user root
Dec 08 09:42:15 compute-1 sudo[59005]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spzhtoluohdbyxgzelftoncluqjqmdbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186933.8552709-63-29165453836749/AnsiballZ_copy.py'
Dec 08 09:42:15 compute-1 sudo[59005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:42:15 compute-1 python3.9[59007]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765186933.8552709-63-29165453836749/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:42:15 compute-1 sudo[59005]: pam_unix(sudo:session): session closed for user root
Dec 08 09:42:15 compute-1 sshd-session[58580]: Connection closed by 192.168.122.30 port 44428
Dec 08 09:42:15 compute-1 sshd-session[58577]: pam_unix(sshd:session): session closed for user zuul
Dec 08 09:42:15 compute-1 systemd[1]: session-14.scope: Deactivated successfully.
Dec 08 09:42:15 compute-1 systemd[1]: session-14.scope: Consumed 1.738s CPU time.
Dec 08 09:42:15 compute-1 systemd-logind[795]: Session 14 logged out. Waiting for processes to exit.
Dec 08 09:42:15 compute-1 systemd-logind[795]: Removed session 14.
Dec 08 09:42:21 compute-1 sshd-session[59033]: Accepted publickey for zuul from 192.168.122.30 port 34092 ssh2: ECDSA SHA256:OYiJ0qK9HlckOLsAMneS1eCh6uM9OgfWStqM+CYb3U8
Dec 08 09:42:21 compute-1 systemd-logind[795]: New session 15 of user zuul.
Dec 08 09:42:21 compute-1 systemd[1]: Started Session 15 of User zuul.
Dec 08 09:42:21 compute-1 sshd-session[59033]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 08 09:42:22 compute-1 python3.9[59187]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 08 09:42:23 compute-1 sudo[59341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjzpijxjttmnkuigpecxhsxeohurvmoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186942.6638205-60-246132840879773/AnsiballZ_file.py'
Dec 08 09:42:23 compute-1 sudo[59341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:42:23 compute-1 python3.9[59343]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:42:23 compute-1 sudo[59341]: pam_unix(sudo:session): session closed for user root
Dec 08 09:42:24 compute-1 sudo[59516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdmzoevopziggvxqebwwboqywfpqdiad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186943.6050432-84-38001645266946/AnsiballZ_stat.py'
Dec 08 09:42:24 compute-1 sudo[59516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:42:24 compute-1 python3.9[59518]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 08 09:42:24 compute-1 sudo[59516]: pam_unix(sudo:session): session closed for user root
Dec 08 09:42:24 compute-1 sudo[59639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnrdrosplqutaijbhhjbrhvzfraoinqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186943.6050432-84-38001645266946/AnsiballZ_copy.py'
Dec 08 09:42:24 compute-1 sudo[59639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:42:25 compute-1 python3.9[59641]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1765186943.6050432-84-38001645266946/.source.json _original_basename=.9myp7fnn follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:42:25 compute-1 sudo[59639]: pam_unix(sudo:session): session closed for user root
Dec 08 09:42:25 compute-1 sudo[59791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbsnknpseswpskmeeefmrhsqbhuobvhh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186945.5536892-153-244488406315008/AnsiballZ_stat.py'
Dec 08 09:42:25 compute-1 sudo[59791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:42:26 compute-1 python3.9[59793]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 08 09:42:26 compute-1 sudo[59791]: pam_unix(sudo:session): session closed for user root
Dec 08 09:42:26 compute-1 sudo[59914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rufssitacpgqjumbdfdafjnlkowqiuyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186945.5536892-153-244488406315008/AnsiballZ_copy.py'
Dec 08 09:42:26 compute-1 sudo[59914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:42:26 compute-1 python3.9[59916]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765186945.5536892-153-244488406315008/.source _original_basename=.m13_1wj9 follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:42:26 compute-1 sudo[59914]: pam_unix(sudo:session): session closed for user root
Dec 08 09:42:27 compute-1 sudo[60066]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhvuwuopswpwckwlifycuvrlpqcpygzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186947.0564451-201-72263830638799/AnsiballZ_file.py'
Dec 08 09:42:27 compute-1 sudo[60066]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:42:27 compute-1 python3.9[60068]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 08 09:42:27 compute-1 sudo[60066]: pam_unix(sudo:session): session closed for user root
Dec 08 09:42:28 compute-1 sudo[60218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xauzyilcmoizmdconovszglfawfguury ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186947.8347914-225-98419267337189/AnsiballZ_stat.py'
Dec 08 09:42:28 compute-1 sudo[60218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:42:28 compute-1 sshd-session[58513]: error: kex_exchange_identification: read: Connection timed out
Dec 08 09:42:28 compute-1 sshd-session[58513]: banner exchange: Connection from 41.216.78.124 port 43996: Connection timed out
Dec 08 09:42:28 compute-1 python3.9[60220]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 08 09:42:28 compute-1 sudo[60218]: pam_unix(sudo:session): session closed for user root
Dec 08 09:42:28 compute-1 sudo[60341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cylywmevhwocdoqixhrczdnvouyuxgzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186947.8347914-225-98419267337189/AnsiballZ_copy.py'
Dec 08 09:42:28 compute-1 sudo[60341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:42:28 compute-1 python3.9[60343]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765186947.8347914-225-98419267337189/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 08 09:42:28 compute-1 sudo[60341]: pam_unix(sudo:session): session closed for user root
Dec 08 09:42:29 compute-1 sudo[60493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgusdanjzubpglskkmcpfwhwxzplklae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186949.10871-225-251088164996610/AnsiballZ_stat.py'
Dec 08 09:42:29 compute-1 sudo[60493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:42:29 compute-1 python3.9[60495]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 08 09:42:29 compute-1 sudo[60493]: pam_unix(sudo:session): session closed for user root
Dec 08 09:42:30 compute-1 sudo[60616]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pybdqnzqastruakfolevyhupeabquwtt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186949.10871-225-251088164996610/AnsiballZ_copy.py'
Dec 08 09:42:30 compute-1 sudo[60616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:42:30 compute-1 python3.9[60618]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765186949.10871-225-251088164996610/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 08 09:42:30 compute-1 sudo[60616]: pam_unix(sudo:session): session closed for user root
Dec 08 09:42:30 compute-1 sudo[60768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vedtdipuernqglfaqffedtebwaxtyboa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186950.5407274-312-246183305738735/AnsiballZ_file.py'
Dec 08 09:42:30 compute-1 sudo[60768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:42:30 compute-1 python3.9[60770]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:42:31 compute-1 sudo[60768]: pam_unix(sudo:session): session closed for user root
Dec 08 09:42:31 compute-1 sudo[60920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bllzucmgugicqklqvbgeqeapfmjlmnuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186951.2811558-336-207031944224188/AnsiballZ_stat.py'
Dec 08 09:42:31 compute-1 sudo[60920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:42:31 compute-1 python3.9[60922]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 08 09:42:31 compute-1 sudo[60920]: pam_unix(sudo:session): session closed for user root
Dec 08 09:42:32 compute-1 sudo[61043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxftcglljdzggwmjgliebimsmlaqxdag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186951.2811558-336-207031944224188/AnsiballZ_copy.py'
Dec 08 09:42:32 compute-1 sudo[61043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:42:32 compute-1 python3.9[61045]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765186951.2811558-336-207031944224188/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:42:32 compute-1 sudo[61043]: pam_unix(sudo:session): session closed for user root
Dec 08 09:42:32 compute-1 sudo[61195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqcmynnuxyrkjvwekkxjuylwszubsfew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186952.51341-381-1410745500401/AnsiballZ_stat.py'
Dec 08 09:42:32 compute-1 sudo[61195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:42:33 compute-1 python3.9[61197]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 08 09:42:33 compute-1 sudo[61195]: pam_unix(sudo:session): session closed for user root
Dec 08 09:42:33 compute-1 sudo[61318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyppymrivrhqsjwbcalqoyzhikleipps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186952.51341-381-1410745500401/AnsiballZ_copy.py'
Dec 08 09:42:33 compute-1 sudo[61318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:42:33 compute-1 python3.9[61320]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765186952.51341-381-1410745500401/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:42:33 compute-1 sudo[61318]: pam_unix(sudo:session): session closed for user root
Dec 08 09:42:34 compute-1 sudo[61470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eytfiqujiiaudbbwonpllngdbtowvyvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186953.8097193-426-252209292949003/AnsiballZ_systemd.py'
Dec 08 09:42:34 compute-1 sudo[61470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:42:34 compute-1 python3.9[61472]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 08 09:42:34 compute-1 systemd[1]: Reloading.
Dec 08 09:42:34 compute-1 systemd-rc-local-generator[61500]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 08 09:42:34 compute-1 systemd-sysv-generator[61504]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 08 09:42:35 compute-1 systemd[1]: Reloading.
Dec 08 09:42:35 compute-1 systemd-rc-local-generator[61534]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 08 09:42:35 compute-1 systemd-sysv-generator[61538]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 08 09:42:35 compute-1 systemd[1]: Starting EDPM Container Shutdown...
Dec 08 09:42:35 compute-1 systemd[1]: Finished EDPM Container Shutdown.
Dec 08 09:42:35 compute-1 sudo[61470]: pam_unix(sudo:session): session closed for user root
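The unit file, preset file and systemd task above reduce to roughly the following; the .service and .preset contents are shipped by the role and are not visible in the log, so only the destinations and the enable/start behaviour are taken from it:

    install -m 0644 edpm-container-shutdown.service /etc/systemd/system/
    install -m 0644 91-edpm-container-shutdown.preset /etc/systemd/system-preset/
    systemctl daemon-reload
    systemctl enable --now edpm-container-shutdown.service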
Dec 08 09:42:36 compute-1 sudo[61698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atrfrrbsdzngwjnnnhrprozmefkhbekh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186955.7329056-450-92208340376920/AnsiballZ_stat.py'
Dec 08 09:42:36 compute-1 sudo[61698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:42:36 compute-1 python3.9[61700]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 08 09:42:36 compute-1 sudo[61698]: pam_unix(sudo:session): session closed for user root
Dec 08 09:42:36 compute-1 sudo[61821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmnxkchjvogneltwiuecelyqnctbyxue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186955.7329056-450-92208340376920/AnsiballZ_copy.py'
Dec 08 09:42:36 compute-1 sudo[61821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:42:36 compute-1 python3.9[61823]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765186955.7329056-450-92208340376920/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:42:36 compute-1 sudo[61821]: pam_unix(sudo:session): session closed for user root
Dec 08 09:42:37 compute-1 sudo[61973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpzszallwonycaowgxikjjkmzfghmjur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186957.088982-495-161086617572334/AnsiballZ_stat.py'
Dec 08 09:42:37 compute-1 sudo[61973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:42:37 compute-1 python3.9[61975]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 08 09:42:37 compute-1 sudo[61973]: pam_unix(sudo:session): session closed for user root
Dec 08 09:42:37 compute-1 sudo[62096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lanzsfcbomzdbkicsjnyrrrzaamxhtdo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186957.088982-495-161086617572334/AnsiballZ_copy.py'
Dec 08 09:42:37 compute-1 sudo[62096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:42:38 compute-1 python3.9[62098]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765186957.088982-495-161086617572334/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:42:38 compute-1 sudo[62096]: pam_unix(sudo:session): session closed for user root
Dec 08 09:42:38 compute-1 sudo[62248]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wukerkzszhdjkqfhvpmrxnofgjeyrdal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186958.4122272-540-94896567177991/AnsiballZ_systemd.py'
Dec 08 09:42:38 compute-1 sudo[62248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:42:39 compute-1 python3.9[62250]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 08 09:42:39 compute-1 systemd[1]: Reloading.
Dec 08 09:42:39 compute-1 systemd-rc-local-generator[62274]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 08 09:42:39 compute-1 systemd-sysv-generator[62278]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 08 09:42:39 compute-1 systemd[1]: Reloading.
Dec 08 09:42:39 compute-1 systemd-sysv-generator[62314]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 08 09:42:39 compute-1 systemd-rc-local-generator[62309]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 08 09:42:39 compute-1 systemd[1]: Starting Create netns directory...
Dec 08 09:42:39 compute-1 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 08 09:42:39 compute-1 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 08 09:42:39 compute-1 systemd[1]: Finished Create netns directory.
Dec 08 09:42:39 compute-1 sudo[62248]: pam_unix(sudo:session): session closed for user root
Dec 08 09:42:40 compute-1 python3.9[62474]: ansible-ansible.builtin.service_facts Invoked
Dec 08 09:42:40 compute-1 network[62491]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 08 09:42:40 compute-1 network[62492]: 'network-scripts' will be removed from distribution in near future.
Dec 08 09:42:40 compute-1 network[62493]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 08 09:42:44 compute-1 sudo[62753]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqagfpmblyzmpgejibajoxruorglzdau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186963.8787792-588-130223013157309/AnsiballZ_systemd.py'
Dec 08 09:42:44 compute-1 sudo[62753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:42:44 compute-1 python3.9[62755]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 08 09:42:44 compute-1 systemd[1]: Reloading.
Dec 08 09:42:44 compute-1 systemd-sysv-generator[62781]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 08 09:42:44 compute-1 systemd-rc-local-generator[62778]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 08 09:42:45 compute-1 systemd[1]: Stopping IPv4 firewall with iptables...
Dec 08 09:42:45 compute-1 iptables.init[62796]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Dec 08 09:42:45 compute-1 iptables.init[62796]: iptables: Flushing firewall rules: [  OK  ]
Dec 08 09:42:45 compute-1 systemd[1]: iptables.service: Deactivated successfully.
Dec 08 09:42:45 compute-1 systemd[1]: Stopped IPv4 firewall with iptables.
Dec 08 09:42:45 compute-1 sudo[62753]: pam_unix(sudo:session): session closed for user root
Dec 08 09:42:45 compute-1 sudo[62990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkupkbznmadubiewbvtevfltjdjdtmnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186965.6496625-588-276197459178955/AnsiballZ_systemd.py'
Dec 08 09:42:45 compute-1 sudo[62990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:42:46 compute-1 python3.9[62992]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 08 09:42:46 compute-1 sudo[62990]: pam_unix(sudo:session): session closed for user root
Dec 08 09:42:47 compute-1 sudo[63146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rybkxtjfhlxwbuascummvugepysxscuw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186967.5164561-636-127425446974719/AnsiballZ_systemd.py'
Dec 08 09:42:47 compute-1 sudo[63146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:42:47 compute-1 sshd-session[62994]: Received disconnect from 103.191.92.236 port 41420:11: Bye Bye [preauth]
Dec 08 09:42:47 compute-1 sshd-session[62994]: Disconnected from authenticating user root 103.191.92.236 port 41420 [preauth]
Dec 08 09:42:48 compute-1 python3.9[63148]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 08 09:42:48 compute-1 systemd[1]: Reloading.
Dec 08 09:42:48 compute-1 systemd-rc-local-generator[63179]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 08 09:42:48 compute-1 systemd-sysv-generator[63184]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 08 09:42:48 compute-1 systemd[1]: Starting Netfilter Tables...
Dec 08 09:42:48 compute-1 systemd[1]: Finished Netfilter Tables.
Dec 08 09:42:48 compute-1 sudo[63146]: pam_unix(sudo:session): session closed for user root
Dec 08 09:42:49 compute-1 sudo[63338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjpnmcfwuvkpmlbjhocjynmcnjkucivy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186969.030328-660-235871849385852/AnsiballZ_command.py'
Dec 08 09:42:49 compute-1 sudo[63338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:42:49 compute-1 python3.9[63340]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 08 09:42:49 compute-1 sudo[63338]: pam_unix(sudo:session): session closed for user root
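The firewall cutover logged between 09:42:44 and 09:42:49 is, in effect, the following sequence (service names and the flush command are taken directly from the logged tasks):

    systemctl disable --now iptables.service
    systemctl disable --now ip6tables.service
    systemctl enable --now nftables
    nft flush ruleset    # start from an empty ruleset before the role installs its own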
Dec 08 09:42:50 compute-1 sudo[63491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ummmkdxnqryssybrrbvlvrhhpoxismvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186970.299549-702-79692926792886/AnsiballZ_stat.py'
Dec 08 09:42:50 compute-1 sudo[63491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:42:50 compute-1 python3.9[63493]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 08 09:42:50 compute-1 sudo[63491]: pam_unix(sudo:session): session closed for user root
Dec 08 09:42:51 compute-1 sudo[63616]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-newixrlrhjooxefwndkdijzmecnzmuwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186970.299549-702-79692926792886/AnsiballZ_copy.py'
Dec 08 09:42:51 compute-1 sudo[63616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:42:51 compute-1 python3.9[63618]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765186970.299549-702-79692926792886/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:42:51 compute-1 sudo[63616]: pam_unix(sudo:session): session closed for user root
Dec 08 09:42:51 compute-1 sudo[63769]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wijxllviegzwjxvgtuarrdrbbtyxtbfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186971.6760206-747-228495507346337/AnsiballZ_systemd.py'
Dec 08 09:42:51 compute-1 sudo[63769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:42:52 compute-1 python3.9[63771]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 08 09:42:52 compute-1 systemd[1]: Reloading OpenSSH server daemon...
Dec 08 09:42:52 compute-1 sshd[1006]: Received SIGHUP; restarting.
Dec 08 09:42:52 compute-1 systemd[1]: Reloaded OpenSSH server daemon.
Dec 08 09:42:52 compute-1 sshd[1006]: Server listening on 0.0.0.0 port 22.
Dec 08 09:42:52 compute-1 sshd[1006]: Server listening on :: port 22.
Dec 08 09:42:52 compute-1 sudo[63769]: pam_unix(sudo:session): session closed for user root
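The sshd_config copy above runs with validate=/usr/sbin/sshd -T -f %s, so the rendered file is syntax-checked before it replaces /etc/ssh/sshd_config, and the daemon is then only reloaded (SIGHUP), not restarted, which keeps existing sessions alive. Roughly — the staged file path here is illustrative, not from the log:

    /usr/sbin/sshd -T -f /tmp/sshd_config.new        # validate the rendered config first
    install -m 0600 /tmp/sshd_config.new /etc/ssh/sshd_config
    systemctl reload sshd                            # SIGHUP, as seen in the log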
Dec 08 09:42:52 compute-1 sudo[63925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqsvryyfgaofwwvsqizgdnhjfzjceijn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186972.6307735-771-274975263828643/AnsiballZ_file.py'
Dec 08 09:42:52 compute-1 sudo[63925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:42:53 compute-1 python3.9[63927]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:42:53 compute-1 sudo[63925]: pam_unix(sudo:session): session closed for user root
Dec 08 09:42:53 compute-1 sudo[64077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjumxcohcvglcxhpvvpbgsdyjstocaha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186973.614271-797-136016639773089/AnsiballZ_stat.py'
Dec 08 09:42:53 compute-1 sudo[64077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:42:54 compute-1 python3.9[64079]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 08 09:42:54 compute-1 sudo[64077]: pam_unix(sudo:session): session closed for user root
Dec 08 09:42:54 compute-1 sudo[64200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-deohtfiftynzaecuqfemcycbrzymxvcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186973.614271-797-136016639773089/AnsiballZ_copy.py'
Dec 08 09:42:54 compute-1 sudo[64200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:42:54 compute-1 python3.9[64202]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765186973.614271-797-136016639773089/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:42:54 compute-1 sudo[64200]: pam_unix(sudo:session): session closed for user root
Dec 08 09:42:55 compute-1 sudo[64352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgkobyrsnmniownszulmbemcjsaakcsf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186975.1730428-849-205375274925964/AnsiballZ_timezone.py'
Dec 08 09:42:55 compute-1 sudo[64352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:42:55 compute-1 python3.9[64354]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec 08 09:42:55 compute-1 systemd[1]: Starting Time & Date Service...
Dec 08 09:42:55 compute-1 systemd[1]: Started Time & Date Service.
Dec 08 09:42:55 compute-1 sudo[64352]: pam_unix(sudo:session): session closed for user root
Dec 08 09:42:56 compute-1 sudo[64508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llnrqxzmawetajlbudqnwhczfcntllue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186976.3465269-876-146945341190142/AnsiballZ_file.py'
Dec 08 09:42:56 compute-1 sudo[64508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:42:56 compute-1 python3.9[64510]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:42:56 compute-1 sudo[64508]: pam_unix(sudo:session): session closed for user root
Dec 08 09:42:57 compute-1 sudo[64660]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eelnrombnkenykqwfisxakdubwdnnoti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186977.074779-900-113292699958500/AnsiballZ_stat.py'
Dec 08 09:42:57 compute-1 sudo[64660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:42:57 compute-1 python3.9[64662]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 08 09:42:57 compute-1 sudo[64660]: pam_unix(sudo:session): session closed for user root
Dec 08 09:42:57 compute-1 sudo[64783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojrjlbfkapouqoosfsaojvzkwcdwrlqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186977.074779-900-113292699958500/AnsiballZ_copy.py'
Dec 08 09:42:57 compute-1 sudo[64783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:42:58 compute-1 python3.9[64785]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765186977.074779-900-113292699958500/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:42:58 compute-1 sudo[64783]: pam_unix(sudo:session): session closed for user root
Dec 08 09:42:58 compute-1 sudo[64935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzcqgwdqkbrhzcurfombwctzwkuxxifs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186978.627149-945-195432955778562/AnsiballZ_stat.py'
Dec 08 09:42:58 compute-1 sudo[64935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:42:59 compute-1 python3.9[64937]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 08 09:42:59 compute-1 sudo[64935]: pam_unix(sudo:session): session closed for user root
Dec 08 09:42:59 compute-1 sudo[65058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcylolxzhxxphahmnpdabuzkwhvwdcpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186978.627149-945-195432955778562/AnsiballZ_copy.py'
Dec 08 09:42:59 compute-1 sudo[65058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:42:59 compute-1 python3.9[65060]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765186978.627149-945-195432955778562/.source.yaml _original_basename=.zugm5acf follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:42:59 compute-1 sudo[65058]: pam_unix(sudo:session): session closed for user root
Dec 08 09:43:00 compute-1 sudo[65210]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obhsyiztvdycxxwyymmxnusywdfvyibq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186979.9559333-990-26098705040016/AnsiballZ_stat.py'
Dec 08 09:43:00 compute-1 sudo[65210]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:43:00 compute-1 python3.9[65212]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 08 09:43:00 compute-1 sudo[65210]: pam_unix(sudo:session): session closed for user root
Dec 08 09:43:00 compute-1 sudo[65333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijfuresmjcxwuepipcbyoqpbuyzolwen ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186979.9559333-990-26098705040016/AnsiballZ_copy.py'
Dec 08 09:43:00 compute-1 sudo[65333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:43:01 compute-1 python3.9[65335]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765186979.9559333-990-26098705040016/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:43:01 compute-1 sudo[65333]: pam_unix(sudo:session): session closed for user root
Dec 08 09:43:01 compute-1 sudo[65485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znsvhadxmlnkwcvddtqdyowxncdaawcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186981.2959883-1035-54474762450565/AnsiballZ_command.py'
Dec 08 09:43:01 compute-1 sudo[65485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:43:01 compute-1 python3.9[65487]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 08 09:43:01 compute-1 sudo[65485]: pam_unix(sudo:session): session closed for user root
Dec 08 09:43:02 compute-1 sudo[65638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgcnoljwqgyutlhmuegfyehfzwreprbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186982.3039083-1059-32085670016983/AnsiballZ_command.py'
Dec 08 09:43:02 compute-1 sudo[65638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:43:02 compute-1 python3.9[65640]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 08 09:43:02 compute-1 sudo[65638]: pam_unix(sudo:session): session closed for user root
Dec 08 09:43:03 compute-1 sudo[65791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbyfzcvpgdgshcklebxvcgfuujossmjo ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765186983.3116505-1083-254195288078380/AnsiballZ_edpm_nftables_from_files.py'
Dec 08 09:43:03 compute-1 sudo[65791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:43:03 compute-1 python3[65793]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 08 09:43:03 compute-1 sudo[65791]: pam_unix(sudo:session): session closed for user root
Dec 08 09:43:05 compute-1 sudo[65943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqitslhcopofckjywvynmtdxsqinppkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186984.8490136-1107-273348106670643/AnsiballZ_stat.py'
Dec 08 09:43:05 compute-1 sudo[65943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:43:05 compute-1 python3.9[65945]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 08 09:43:05 compute-1 sudo[65943]: pam_unix(sudo:session): session closed for user root
Dec 08 09:43:05 compute-1 sudo[66066]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nojuiruumdpxzunymgtctrihokgqqgmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186984.8490136-1107-273348106670643/AnsiballZ_copy.py'
Dec 08 09:43:05 compute-1 sudo[66066]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:43:05 compute-1 python3.9[66068]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765186984.8490136-1107-273348106670643/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:43:05 compute-1 sudo[66066]: pam_unix(sudo:session): session closed for user root
Dec 08 09:43:06 compute-1 sudo[66218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykszjokxevoiwgsnvizrsthzvuiatexz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186986.1480196-1152-77122877711181/AnsiballZ_stat.py'
Dec 08 09:43:06 compute-1 sudo[66218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:43:06 compute-1 python3.9[66220]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 08 09:43:06 compute-1 sudo[66218]: pam_unix(sudo:session): session closed for user root
Dec 08 09:43:07 compute-1 sudo[66341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtsrozvcbbrgoknddbdqukvhvuqjrxzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186986.1480196-1152-77122877711181/AnsiballZ_copy.py'
Dec 08 09:43:07 compute-1 sudo[66341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:43:07 compute-1 python3.9[66343]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765186986.1480196-1152-77122877711181/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:43:07 compute-1 sudo[66341]: pam_unix(sudo:session): session closed for user root
Dec 08 09:43:08 compute-1 sudo[66493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdrxahwxlruhavaacsfdwvzwatnxywsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186987.7928867-1197-75734689677837/AnsiballZ_stat.py'
Dec 08 09:43:08 compute-1 sudo[66493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:43:08 compute-1 python3.9[66495]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 08 09:43:08 compute-1 sudo[66493]: pam_unix(sudo:session): session closed for user root
Dec 08 09:43:08 compute-1 sudo[66616]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfrhqvtacagkxwdclbqkdsfimvctonvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186987.7928867-1197-75734689677837/AnsiballZ_copy.py'
Dec 08 09:43:08 compute-1 sudo[66616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:43:08 compute-1 python3.9[66618]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765186987.7928867-1197-75734689677837/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:43:08 compute-1 sudo[66616]: pam_unix(sudo:session): session closed for user root
Dec 08 09:43:09 compute-1 sudo[66770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhnldzeapqoqyeojbgxswpbwjvurdrvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186989.2070327-1242-74223584136647/AnsiballZ_stat.py'
Dec 08 09:43:09 compute-1 sudo[66770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:43:09 compute-1 python3.9[66772]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 08 09:43:09 compute-1 sudo[66770]: pam_unix(sudo:session): session closed for user root
Dec 08 09:43:10 compute-1 sudo[66893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfnyyzvzpkvbppjyatnibqpguzasuwch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186989.2070327-1242-74223584136647/AnsiballZ_copy.py'
Dec 08 09:43:10 compute-1 sudo[66893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:43:10 compute-1 sshd-session[66643]: Invalid user jenkins from 180.76.105.69 port 42918
Dec 08 09:43:10 compute-1 python3.9[66895]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765186989.2070327-1242-74223584136647/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:43:10 compute-1 sudo[66893]: pam_unix(sudo:session): session closed for user root
Dec 08 09:43:10 compute-1 sshd-session[66643]: Received disconnect from 180.76.105.69 port 42918:11: Bye Bye [preauth]
Dec 08 09:43:10 compute-1 sshd-session[66643]: Disconnected from invalid user jenkins 180.76.105.69 port 42918 [preauth]
Dec 08 09:43:11 compute-1 sudo[67045]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfxnltqrtnnyhodobrdonhnrysjbfngs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186990.6148472-1287-264470672552724/AnsiballZ_stat.py'
Dec 08 09:43:11 compute-1 sudo[67045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:43:11 compute-1 python3.9[67047]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 08 09:43:11 compute-1 sudo[67045]: pam_unix(sudo:session): session closed for user root
Dec 08 09:43:11 compute-1 sudo[67168]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-remhfebfgobeolqyohsvwtkfxvigaavg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186990.6148472-1287-264470672552724/AnsiballZ_copy.py'
Dec 08 09:43:11 compute-1 sudo[67168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:43:11 compute-1 python3.9[67170]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765186990.6148472-1287-264470672552724/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:43:11 compute-1 sudo[67168]: pam_unix(sudo:session): session closed for user root
Dec 08 09:43:12 compute-1 sudo[67320]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnhsrwygvyrgznnnspnxmizhohcfqrlm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186992.2737253-1332-239076690079746/AnsiballZ_file.py'
Dec 08 09:43:12 compute-1 sudo[67320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:43:12 compute-1 python3.9[67322]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:43:12 compute-1 sudo[67320]: pam_unix(sudo:session): session closed for user root
Dec 08 09:43:13 compute-1 sudo[67472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evbcbivoqgohffzlvhjlqaiuiqlzgjkw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186992.992041-1356-94553303672955/AnsiballZ_command.py'
Dec 08 09:43:13 compute-1 sudo[67472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:43:13 compute-1 python3.9[67474]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 08 09:43:13 compute-1 sudo[67472]: pam_unix(sudo:session): session closed for user root
Dec 08 09:43:14 compute-1 sudo[67632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rejicdndsfgqoeuklxyhwerocopzcqew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186993.8416975-1380-177104039005420/AnsiballZ_blockinfile.py'
Dec 08 09:43:14 compute-1 sudo[67632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:43:14 compute-1 python3.9[67634]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:43:14 compute-1 sudo[67632]: pam_unix(sudo:session): session closed for user root
Dec 08 09:43:15 compute-1 sudo[67785]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxwcmtnozotcmwmvxsipqeepogufmpts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186994.8822255-1407-166114302875811/AnsiballZ_file.py'
Dec 08 09:43:15 compute-1 sudo[67785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:43:15 compute-1 python3.9[67787]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:43:15 compute-1 sudo[67785]: pam_unix(sudo:session): session closed for user root
Dec 08 09:43:15 compute-1 sudo[67937]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcvtdmvbmstkpecltftdbxngacxmiqxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186995.515334-1407-21460350349569/AnsiballZ_file.py'
Dec 08 09:43:15 compute-1 sudo[67937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:43:16 compute-1 python3.9[67939]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:43:16 compute-1 sudo[67937]: pam_unix(sudo:session): session closed for user root
Dec 08 09:43:16 compute-1 sudo[68089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-seszpsadovpxcrufllcacrzhnjbygsvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186996.3099627-1452-169406312447560/AnsiballZ_mount.py'
Dec 08 09:43:16 compute-1 sudo[68089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:43:16 compute-1 python3.9[68091]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 08 09:43:16 compute-1 sudo[68089]: pam_unix(sudo:session): session closed for user root
Dec 08 09:43:17 compute-1 sudo[68242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgzmzablituxnropcwekqayvjpqvbdgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765186997.1207197-1452-62820580486532/AnsiballZ_mount.py'
Dec 08 09:43:17 compute-1 sudo[68242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:43:17 compute-1 python3.9[68244]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 08 09:43:17 compute-1 sudo[68242]: pam_unix(sudo:session): session closed for user root
Dec 08 09:43:18 compute-1 sshd-session[59036]: Connection closed by 192.168.122.30 port 34092
Dec 08 09:43:18 compute-1 sshd-session[59033]: pam_unix(sshd:session): session closed for user zuul
Dec 08 09:43:18 compute-1 systemd[1]: session-15.scope: Deactivated successfully.
Dec 08 09:43:18 compute-1 systemd[1]: session-15.scope: Consumed 39.131s CPU time.
Dec 08 09:43:18 compute-1 systemd-logind[795]: Session 15 logged out. Waiting for processes to exit.
Dec 08 09:43:18 compute-1 systemd-logind[795]: Removed session 15.
Dec 08 09:43:19 compute-1 sshd-session[68270]: Received disconnect from 95.128.196.223 port 57122:11: Bye Bye [preauth]
Dec 08 09:43:19 compute-1 sshd-session[68270]: Disconnected from authenticating user root 95.128.196.223 port 57122 [preauth]
Dec 08 09:43:23 compute-1 sshd-session[68272]: Accepted publickey for zuul from 192.168.122.30 port 53672 ssh2: ECDSA SHA256:OYiJ0qK9HlckOLsAMneS1eCh6uM9OgfWStqM+CYb3U8
Dec 08 09:43:23 compute-1 systemd-logind[795]: New session 16 of user zuul.
Dec 08 09:43:23 compute-1 systemd[1]: Started Session 16 of User zuul.
Dec 08 09:43:23 compute-1 sshd-session[68272]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 08 09:43:24 compute-1 sshd-session[67506]: error: kex_exchange_identification: read: Connection timed out
Dec 08 09:43:24 compute-1 sshd-session[67506]: banner exchange: Connection from 106.13.69.159 port 39970: Connection timed out
Dec 08 09:43:24 compute-1 sudo[68425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gepyrkoxllmgdzfhfkgfffuudtogqdeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765187003.5944312-19-149150745638868/AnsiballZ_tempfile.py'
Dec 08 09:43:24 compute-1 sudo[68425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:43:24 compute-1 python3.9[68427]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Dec 08 09:43:24 compute-1 sudo[68425]: pam_unix(sudo:session): session closed for user root
Dec 08 09:43:24 compute-1 sudo[68577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffiipqhgehpxytpyhwhhptokmurskhlx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765187004.5187535-55-236227682279328/AnsiballZ_stat.py'
Dec 08 09:43:24 compute-1 sudo[68577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:43:25 compute-1 python3.9[68579]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 08 09:43:25 compute-1 sudo[68577]: pam_unix(sudo:session): session closed for user root
Dec 08 09:43:25 compute-1 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 08 09:43:26 compute-1 sudo[68731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hoaonvwuvgrtslbyygymjdvoxsajjfcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765187005.565834-85-98632992869712/AnsiballZ_setup.py'
Dec 08 09:43:26 compute-1 sudo[68731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:43:26 compute-1 python3.9[68733]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 08 09:43:26 compute-1 sudo[68731]: pam_unix(sudo:session): session closed for user root
Dec 08 09:43:27 compute-1 sudo[68883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlplhiygaphllmbzojlhutprppqejwbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765187006.7522023-110-185267942085901/AnsiballZ_blockinfile.py'
Dec 08 09:43:27 compute-1 sudo[68883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:43:27 compute-1 python3.9[68885]: ansible-ansible.builtin.blockinfile Invoked with block=compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDJ8NVpl4t0huV4+T0af6g85GnyyXSuSwbTPUcRC4oID0GzeUJSQ95u2BvKUkl7F6B5EGewEFy3IY9b514xvpbAdhjs7n5SblOcPNUJ+y693K3gdRy/ANBX6zwRxcDMLMyKa8JM7Tdp31exvoQgef6Ep9i62uFn/NfDJtCfGlrN47cRnnIdsYNIImcLTHBGS3hYjDfXiIsNjK3/QjWQXtvDY8RtPZFkdEbVyb7U8G30FzDbr4XI93l9Gr9VRGBtV4lCUgkTnXGFf232VBqxvuyHgk+SXuLKDpTE+BcVxTMwJkyHsksB+UnvpLUbxZddGUIr1vQ4jJ9VJGjVjuztU6Nyje6OrMgs6HBt1mja5G9KkYFxAEQRKP+X4fo6vT0V0DhMJ6aQgUoHTrlvK7dcPY9cNg9D3MonkMsmq0Pnwh8KuCqvB1rIu6tGFXV80vnbpGgq4Gk7mpYr96b41joz7Cx5OBZvmSL5BFvd3CiwI/gncrrztIjmaL6bUmhCxhnnTMc=
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDhxvi6X/2xlfPUZnHs0RwyPTPsqOO1DYh2NBIbGdlbz
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKw6RojtVQ4sb307auN8wfR/el1N1E7X58nYuMS8W5BYU/mWwfoBlDfdW5gSR8N6MMMTBCtk95axIutM9UTPHjA=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCqGrhm6XY5gL0qKV70KB+VWpiiWHOdu0MlsKHGYoCPmup7NRaI53l8P2B2gElQCUWIOrGSqX+krCxY+35BNmEjUpjw2KT+AFloSBIschbqhiwMysoE8FO+lpr/XU7s+rWWR/O4i/v/olXuA8mKbrkuJSfJHx2AdeXiykSViUK48d1CXdPz56NX/f9lJvPo6S96EhJShAVdPDwMFIPPDc321VAJdd0sXRu9K5njusTG2DlBTHNfHQb3XGTuZQcaP266UMa7a/K+w7hsSOGu1m8dZ3PloU3bAZJV3QUDpIzYwRXGO2w1BcvHVS6YvCLutLZqaMHz2KfT8atjyYylLKRl3Lf7OsrLbreyUVDVtlTgASxom/DE18McJlpdfM7gduJWByn5CKIsncToF5DFI/F+o/hi8ffGZWwegtxxhj/Zvo6GkTnvhppR/rClJpfUYWk1ufmryiUXqo/UsNViCnHjxVVqcFBUcJCj0uBpgQdlXPD3AUgiUcCAP9fh9Dk8iwM=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKv92xjYFN5l4X2DvW+J//ZkQOtLVnjqyglt5FFEdTjH
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJHSgMad2U38teRVX3WClQWpAI17/0L20etvnwLywKQrzqu+b2F4ZqSpsu2yCFYcfRJoEtDQwSmG5UmhpK3Kadw=
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDB/zZYe8DfA99Ng/3XEzOPTwv/qjSjlCTbcZi22m+osE/6ubIaQtU6hZZ2UiWc5OMDuUGGTWciB+bOwgu4HPT648N8k8XawJ1ZE3yPo7GhPG4jt7+lRmK+VKR+yqR7V8udNU5cfkL5J4lcxOUNxyrZjEodEovTMNeHctTE33QgcntogqUmaGntfJA3jK4xa/i3INl643DoELTFJLNdvHN1qMJ7v32SIF49fjNuKORX6eXYA2ukSiPZ23COyZNgL9OgpXXceoF6gpYggg5sLTck2S3p07p/GNt0SQSx9Sf2edAFUVg7IJzqEL6+hBheK2L/kpEQmujn3VKQrCJ7fL0wu7ys1rnbA3g/jDT3DqpjOL3n+U9frJKSH9MHAD+TjfOZa7oHLQtdBUSLjkmL5Ph14Nmtryoei9MZrKdhwr1taPd46t1jogP0XyrvkLWJmtiAcVQjubJqiRVVTb813nHgI9Txp2W1RM9V5oMdCRBgWsVuMoB52GcVSG5ACm33/xk=
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPg0wKCpmeAJBOHJra27vJw1dBiql8GMRNSifIDzjK9p
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIYHdYuKMSpYhFy8rWCrlTlBjprkLeMIvpYpr5DwhaVqN10fCxq9CoQYeZUDQNOCtaemskK9zzUyW1cfqwnaTnI=
                                             create=True mode=0644 path=/tmp/ansible.fr4uoito state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:43:27 compute-1 sudo[68883]: pam_unix(sudo:session): session closed for user root
Dec 08 09:43:28 compute-1 sudo[69035]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhsrwrcculdwbyhwgnfurzezaunruytg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765187007.5636928-134-29568022056852/AnsiballZ_command.py'
Dec 08 09:43:28 compute-1 sudo[69035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:43:28 compute-1 python3.9[69037]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.fr4uoito' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 08 09:43:28 compute-1 sudo[69035]: pam_unix(sudo:session): session closed for user root
Dec 08 09:43:28 compute-1 sudo[69189]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwmvovddxjuhpjldtzbqefhyrigdtaox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765187008.4998932-158-137861586845771/AnsiballZ_file.py'
Dec 08 09:43:28 compute-1 sudo[69189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:43:29 compute-1 python3.9[69191]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.fr4uoito state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:43:29 compute-1 sudo[69189]: pam_unix(sudo:session): session closed for user root
Dec 08 09:43:29 compute-1 sshd-session[68275]: Connection closed by 192.168.122.30 port 53672
Dec 08 09:43:29 compute-1 sshd-session[68272]: pam_unix(sshd:session): session closed for user zuul
Dec 08 09:43:29 compute-1 systemd[1]: session-16.scope: Deactivated successfully.
Dec 08 09:43:29 compute-1 systemd[1]: session-16.scope: Consumed 3.768s CPU time.
Dec 08 09:43:29 compute-1 systemd-logind[795]: Session 16 logged out. Waiting for processes to exit.
Dec 08 09:43:29 compute-1 systemd-logind[795]: Removed session 16.
Dec 08 09:43:34 compute-1 sshd-session[69216]: Accepted publickey for zuul from 192.168.122.30 port 55768 ssh2: ECDSA SHA256:OYiJ0qK9HlckOLsAMneS1eCh6uM9OgfWStqM+CYb3U8
Dec 08 09:43:34 compute-1 systemd-logind[795]: New session 17 of user zuul.
Dec 08 09:43:34 compute-1 systemd[1]: Started Session 17 of User zuul.
Dec 08 09:43:34 compute-1 sshd-session[69216]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 08 09:43:35 compute-1 python3.9[69369]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 08 09:43:36 compute-1 sudo[69523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-riayorqcugmvvztwdxwjlenttclxdips ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765187016.067307-57-146002552336965/AnsiballZ_systemd.py'
Dec 08 09:43:36 compute-1 sudo[69523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:43:36 compute-1 python3.9[69525]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 08 09:43:37 compute-1 sudo[69523]: pam_unix(sudo:session): session closed for user root
Dec 08 09:43:37 compute-1 sudo[69677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-paxzspfjwviubbpqzhttfqyzrxrlsepj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765187017.2571769-81-147435924652591/AnsiballZ_systemd.py'
Dec 08 09:43:37 compute-1 sudo[69677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:43:37 compute-1 python3.9[69679]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 08 09:43:37 compute-1 sudo[69677]: pam_unix(sudo:session): session closed for user root
Dec 08 09:43:38 compute-1 sudo[69830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykdqljokqvxopkjexbfpugibsmttugst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765187018.260344-108-239924958982917/AnsiballZ_command.py'
Dec 08 09:43:38 compute-1 sudo[69830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:43:38 compute-1 python3.9[69832]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 08 09:43:38 compute-1 sudo[69830]: pam_unix(sudo:session): session closed for user root
Dec 08 09:43:39 compute-1 sudo[69983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljclshuyuknuzyiizurrlmkhlmehvxxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765187019.1868808-132-275655487954816/AnsiballZ_stat.py'
Dec 08 09:43:39 compute-1 sudo[69983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:43:39 compute-1 python3.9[69985]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 08 09:43:39 compute-1 sudo[69983]: pam_unix(sudo:session): session closed for user root
Dec 08 09:43:40 compute-1 sudo[70137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmegnbvvayejeooxsdkvvecjericznpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765187020.1282473-156-178756709408233/AnsiballZ_command.py'
Dec 08 09:43:40 compute-1 sudo[70137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:43:40 compute-1 python3.9[70139]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 08 09:43:40 compute-1 sudo[70137]: pam_unix(sudo:session): session closed for user root
Dec 08 09:43:41 compute-1 sudo[70292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntcfjaojfurhmtcccfocmhreedvfxhfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765187020.8439887-180-133631735264964/AnsiballZ_file.py'
Dec 08 09:43:41 compute-1 sudo[70292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:43:41 compute-1 python3.9[70294]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:43:41 compute-1 sudo[70292]: pam_unix(sudo:session): session closed for user root
Dec 08 09:43:41 compute-1 sshd-session[69219]: Connection closed by 192.168.122.30 port 55768
Dec 08 09:43:41 compute-1 sshd-session[69216]: pam_unix(sshd:session): session closed for user zuul
Dec 08 09:43:41 compute-1 systemd[1]: session-17.scope: Deactivated successfully.
Dec 08 09:43:41 compute-1 systemd[1]: session-17.scope: Consumed 5.076s CPU time.
Dec 08 09:43:41 compute-1 systemd-logind[795]: Session 17 logged out. Waiting for processes to exit.
Dec 08 09:43:41 compute-1 systemd-logind[795]: Removed session 17.
Dec 08 09:43:47 compute-1 sshd-session[70319]: Accepted publickey for zuul from 192.168.122.30 port 32866 ssh2: ECDSA SHA256:OYiJ0qK9HlckOLsAMneS1eCh6uM9OgfWStqM+CYb3U8
Dec 08 09:43:47 compute-1 systemd-logind[795]: New session 18 of user zuul.
Dec 08 09:43:47 compute-1 systemd[1]: Started Session 18 of User zuul.
Dec 08 09:43:47 compute-1 sshd-session[70319]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 08 09:43:48 compute-1 python3.9[70472]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 08 09:43:49 compute-1 sudo[70626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbyvydneqopiwydpdtidlnsjvipsqqpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765187029.2102463-63-187722498486259/AnsiballZ_setup.py'
Dec 08 09:43:49 compute-1 sudo[70626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:43:49 compute-1 python3.9[70628]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 08 09:43:50 compute-1 sudo[70626]: pam_unix(sudo:session): session closed for user root
Dec 08 09:43:50 compute-1 sudo[70710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcvbdezevcsnhpbntnwbzkpkkxobevqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765187029.2102463-63-187722498486259/AnsiballZ_dnf.py'
Dec 08 09:43:50 compute-1 sudo[70710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:43:50 compute-1 python3.9[70712]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 08 09:43:51 compute-1 sudo[70710]: pam_unix(sudo:session): session closed for user root
Dec 08 09:43:52 compute-1 python3.9[70863]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 08 09:43:54 compute-1 python3.9[71014]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 08 09:43:55 compute-1 python3.9[71164]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 08 09:43:55 compute-1 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 08 09:43:55 compute-1 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 08 09:43:56 compute-1 python3.9[71315]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 08 09:43:56 compute-1 sshd-session[70322]: Connection closed by 192.168.122.30 port 32866
Dec 08 09:43:56 compute-1 sshd-session[70319]: pam_unix(sshd:session): session closed for user zuul
Dec 08 09:43:56 compute-1 systemd[1]: session-18.scope: Deactivated successfully.
Dec 08 09:43:56 compute-1 systemd[1]: session-18.scope: Consumed 6.259s CPU time.
Dec 08 09:43:56 compute-1 systemd-logind[795]: Session 18 logged out. Waiting for processes to exit.
Dec 08 09:43:56 compute-1 systemd-logind[795]: Removed session 18.
Dec 08 09:44:04 compute-1 sshd-session[71340]: Accepted publickey for zuul from 38.102.83.192 port 50300 ssh2: RSA SHA256:9ILqrOWgZKsXCAP6ek0P69EdElsz9g1+oVZcuSDpYrI
Dec 08 09:44:04 compute-1 systemd-logind[795]: New session 19 of user zuul.
Dec 08 09:44:04 compute-1 systemd[1]: Started Session 19 of User zuul.
Dec 08 09:44:04 compute-1 sshd-session[71340]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 08 09:44:04 compute-1 sudo[71416]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbfgbuqmrhyeenclyaovphkimtpwmnew ; /usr/bin/python3'
Dec 08 09:44:04 compute-1 sudo[71416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:44:05 compute-1 useradd[71420]: new group: name=ceph-admin, GID=42478
Dec 08 09:44:05 compute-1 useradd[71420]: new user: name=ceph-admin, UID=42477, GID=42478, home=/home/ceph-admin, shell=/bin/bash, from=none
Dec 08 09:44:05 compute-1 sudo[71416]: pam_unix(sudo:session): session closed for user root
Dec 08 09:44:05 compute-1 sudo[71502]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvkhqsdunmqjttyfsxuctxntnkreymyk ; /usr/bin/python3'
Dec 08 09:44:05 compute-1 sudo[71502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:44:05 compute-1 sudo[71502]: pam_unix(sudo:session): session closed for user root
Dec 08 09:44:06 compute-1 sudo[71575]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxhrbgojzeuppbhseadjojypodycrjui ; /usr/bin/python3'
Dec 08 09:44:06 compute-1 sudo[71575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:44:06 compute-1 sudo[71575]: pam_unix(sudo:session): session closed for user root
Dec 08 09:44:06 compute-1 sudo[71625]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdgacvjgikhfigllcfugwswlsvqnwjbt ; /usr/bin/python3'
Dec 08 09:44:06 compute-1 sudo[71625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:44:06 compute-1 sudo[71625]: pam_unix(sudo:session): session closed for user root
Dec 08 09:44:07 compute-1 sudo[71651]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smupcjrhhqhycyzpdygkfhivtagwrwic ; /usr/bin/python3'
Dec 08 09:44:07 compute-1 sudo[71651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:44:07 compute-1 sudo[71651]: pam_unix(sudo:session): session closed for user root
Dec 08 09:44:07 compute-1 sudo[71677]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdexamwpqdebuhqykrhczjwnzwjrpwsm ; /usr/bin/python3'
Dec 08 09:44:07 compute-1 sudo[71677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:44:07 compute-1 sudo[71677]: pam_unix(sudo:session): session closed for user root
Dec 08 09:44:08 compute-1 sudo[71703]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hloslnlojyskxeapccwascshqrrtggwd ; /usr/bin/python3'
Dec 08 09:44:08 compute-1 sudo[71703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:44:08 compute-1 sudo[71703]: pam_unix(sudo:session): session closed for user root
Dec 08 09:44:08 compute-1 sudo[71781]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmbgoaffmmhexyfyqhkorpkazlmonnhq ; /usr/bin/python3'
Dec 08 09:44:08 compute-1 sudo[71781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:44:08 compute-1 sudo[71781]: pam_unix(sudo:session): session closed for user root
Dec 08 09:44:08 compute-1 sudo[71854]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxcgoqninqvmyucjopomzfipjnbhgoif ; /usr/bin/python3'
Dec 08 09:44:08 compute-1 sudo[71854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:44:09 compute-1 sudo[71854]: pam_unix(sudo:session): session closed for user root
Dec 08 09:44:09 compute-1 sudo[71958]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrwlexmztbkggdqthvpgudwtwfmsglib ; /usr/bin/python3'
Dec 08 09:44:09 compute-1 sudo[71958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:44:09 compute-1 sudo[71958]: pam_unix(sudo:session): session closed for user root
Dec 08 09:44:09 compute-1 sudo[72031]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hihavavvktfjmpymmzixrxmplazjqflt ; /usr/bin/python3'
Dec 08 09:44:09 compute-1 sudo[72031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:44:10 compute-1 sudo[72031]: pam_unix(sudo:session): session closed for user root
Dec 08 09:44:10 compute-1 sshd-session[71905]: Received disconnect from 79.32.212.213 port 41728:11: Bye Bye [preauth]
Dec 08 09:44:10 compute-1 sshd-session[71905]: Disconnected from authenticating user root 79.32.212.213 port 41728 [preauth]
Dec 08 09:44:10 compute-1 sudo[72081]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znsyvnewmgwqafdudsuejnrmyvpjaohf ; /usr/bin/python3'
Dec 08 09:44:10 compute-1 sudo[72081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:44:10 compute-1 python3[72083]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 08 09:44:11 compute-1 sudo[72081]: pam_unix(sudo:session): session closed for user root
Dec 08 09:44:12 compute-1 sudo[72177]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zoglfwyeqjxmuifzfngjciqexchkpdlo ; /usr/bin/python3'
Dec 08 09:44:12 compute-1 sudo[72177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:44:12 compute-1 python3[72179]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 08 09:44:13 compute-1 sudo[72177]: pam_unix(sudo:session): session closed for user root
Dec 08 09:44:14 compute-1 sudo[72204]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpmecxubszkycolzwezxpufhouylpyub ; /usr/bin/python3'
Dec 08 09:44:14 compute-1 sudo[72204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:44:14 compute-1 python3[72206]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 08 09:44:14 compute-1 sudo[72204]: pam_unix(sudo:session): session closed for user root
Dec 08 09:44:14 compute-1 sudo[72230]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzofvhubynlstjxvvosdelzqvuwewqww ; /usr/bin/python3'
Dec 08 09:44:14 compute-1 sudo[72230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:44:14 compute-1 python3[72232]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
                                          losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 08 09:44:14 compute-1 kernel: loop: module loaded
Dec 08 09:44:14 compute-1 kernel: loop3: detected capacity change from 0 to 41943040
Dec 08 09:44:14 compute-1 sudo[72230]: pam_unix(sudo:session): session closed for user root
Dec 08 09:44:14 compute-1 sudo[72265]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-diljasfqbpwfsmgegyxiauogiixwuwmc ; /usr/bin/python3'
Dec 08 09:44:14 compute-1 sudo[72265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:44:15 compute-1 python3[72267]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                          vgcreate ceph_vg0 /dev/loop3
                                          lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 08 09:44:15 compute-1 lvm[72270]: PV /dev/loop3 not used.
Dec 08 09:44:15 compute-1 lvm[72280]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 08 09:44:15 compute-1 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Dec 08 09:44:15 compute-1 sudo[72265]: pam_unix(sudo:session): session closed for user root
Dec 08 09:44:15 compute-1 lvm[72282]:   1 logical volume(s) in volume group "ceph_vg0" now active
Dec 08 09:44:15 compute-1 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Dec 08 09:44:15 compute-1 sudo[72359]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcvmtztwehidehzffumamcefhdmxrlsz ; /usr/bin/python3'
Dec 08 09:44:15 compute-1 sudo[72359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:44:15 compute-1 python3[72361]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 08 09:44:15 compute-1 sudo[72359]: pam_unix(sudo:session): session closed for user root
Dec 08 09:44:16 compute-1 sudo[72432]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvwiyifhkengpsfxmkdxiezfgumybffk ; /usr/bin/python3'
Dec 08 09:44:16 compute-1 sudo[72432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:44:16 compute-1 python3[72434]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765187055.5255275-36798-188292118552472/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 08 09:44:16 compute-1 sudo[72432]: pam_unix(sudo:session): session closed for user root
Dec 08 09:44:16 compute-1 chronyd[58551]: Selected source 142.4.192.253 (pool.ntp.org)
Dec 08 09:44:16 compute-1 sudo[72482]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlyqgvbuamcncuzzfkswyukoahohflqe ; /usr/bin/python3'
Dec 08 09:44:16 compute-1 sudo[72482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:44:17 compute-1 python3[72484]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 08 09:44:17 compute-1 systemd[1]: Reloading.
Dec 08 09:44:17 compute-1 systemd-rc-local-generator[72507]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 08 09:44:17 compute-1 systemd-sysv-generator[72515]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 08 09:44:17 compute-1 systemd[1]: Starting Ceph OSD losetup...
Dec 08 09:44:17 compute-1 bash[72526]: /dev/loop3: [64513]:4327971 (/var/lib/ceph-osd-0.img)
Dec 08 09:44:17 compute-1 systemd[1]: Finished Ceph OSD losetup.
Dec 08 09:44:17 compute-1 sudo[72482]: pam_unix(sudo:session): session closed for user root
Dec 08 09:44:17 compute-1 lvm[72527]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 08 09:44:17 compute-1 lvm[72527]: VG ceph_vg0 finished
Dec 08 09:44:19 compute-1 python3[72551]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 08 09:44:27 compute-1 sshd-session[72595]: Received disconnect from 103.191.92.236 port 39486:11: Bye Bye [preauth]
Dec 08 09:44:27 compute-1 sshd-session[72595]: Disconnected from authenticating user root 103.191.92.236 port 39486 [preauth]
Dec 08 09:44:42 compute-1 sshd-session[72597]: Received disconnect from 95.128.196.223 port 46694:11: Bye Bye [preauth]
Dec 08 09:44:42 compute-1 sshd-session[72597]: Disconnected from authenticating user root 95.128.196.223 port 46694 [preauth]
Dec 08 09:45:25 compute-1 sshd-session[72600]: Invalid user usuario from 79.32.212.213 port 42758
Dec 08 09:45:25 compute-1 sshd-session[72600]: Received disconnect from 79.32.212.213 port 42758:11: Bye Bye [preauth]
Dec 08 09:45:25 compute-1 sshd-session[72600]: Disconnected from invalid user usuario 79.32.212.213 port 42758 [preauth]
Dec 08 09:45:35 compute-1 sshd-session[72599]: error: kex_exchange_identification: read: Connection timed out
Dec 08 09:45:35 compute-1 sshd-session[72599]: banner exchange: Connection from 120.48.123.76 port 50426: Connection timed out
Dec 08 09:45:43 compute-1 sshd-session[72602]: Accepted publickey for ceph-admin from 192.168.122.100 port 40222 ssh2: RSA SHA256:8l9QqAG/R36mxhB8hMA3YKcV0YDWbaRrGefHfTlg3OU
Dec 08 09:45:43 compute-1 systemd-logind[795]: New session 20 of user ceph-admin.
Dec 08 09:45:43 compute-1 systemd[1]: Created slice User Slice of UID 42477.
Dec 08 09:45:43 compute-1 systemd[1]: Starting User Runtime Directory /run/user/42477...
Dec 08 09:45:43 compute-1 systemd[1]: Finished User Runtime Directory /run/user/42477.
Dec 08 09:45:43 compute-1 systemd[1]: Starting User Manager for UID 42477...
Dec 08 09:45:43 compute-1 systemd[72606]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 08 09:45:43 compute-1 systemd[72606]: Queued start job for default target Main User Target.
Dec 08 09:45:43 compute-1 sshd-session[72612]: Accepted publickey for ceph-admin from 192.168.122.100 port 40228 ssh2: RSA SHA256:8l9QqAG/R36mxhB8hMA3YKcV0YDWbaRrGefHfTlg3OU
Dec 08 09:45:43 compute-1 systemd[72606]: Created slice User Application Slice.
Dec 08 09:45:43 compute-1 systemd[72606]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 08 09:45:43 compute-1 systemd[72606]: Started Daily Cleanup of User's Temporary Directories.
Dec 08 09:45:43 compute-1 systemd[72606]: Reached target Paths.
Dec 08 09:45:43 compute-1 systemd[72606]: Reached target Timers.
Dec 08 09:45:43 compute-1 systemd[72606]: Starting D-Bus User Message Bus Socket...
Dec 08 09:45:43 compute-1 systemd-logind[795]: New session 22 of user ceph-admin.
Dec 08 09:45:43 compute-1 systemd[72606]: Starting Create User's Volatile Files and Directories...
Dec 08 09:45:43 compute-1 systemd[72606]: Listening on D-Bus User Message Bus Socket.
Dec 08 09:45:43 compute-1 systemd[72606]: Reached target Sockets.
Dec 08 09:45:43 compute-1 systemd[72606]: Finished Create User's Volatile Files and Directories.
Dec 08 09:45:43 compute-1 systemd[72606]: Reached target Basic System.
Dec 08 09:45:43 compute-1 systemd[72606]: Reached target Main User Target.
Dec 08 09:45:43 compute-1 systemd[72606]: Startup finished in 143ms.
Dec 08 09:45:43 compute-1 systemd[1]: Started User Manager for UID 42477.
Dec 08 09:45:43 compute-1 systemd[1]: Started Session 20 of User ceph-admin.
Dec 08 09:45:43 compute-1 systemd[1]: Started Session 22 of User ceph-admin.
Dec 08 09:45:43 compute-1 sshd-session[72602]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 08 09:45:43 compute-1 sshd-session[72612]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 08 09:45:43 compute-1 sudo[72627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 08 09:45:43 compute-1 sudo[72627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:45:43 compute-1 sudo[72627]: pam_unix(sudo:session): session closed for user root
Dec 08 09:45:43 compute-1 sshd-session[72652]: Accepted publickey for ceph-admin from 192.168.122.100 port 40230 ssh2: RSA SHA256:8l9QqAG/R36mxhB8hMA3YKcV0YDWbaRrGefHfTlg3OU
Dec 08 09:45:43 compute-1 systemd-logind[795]: New session 23 of user ceph-admin.
Dec 08 09:45:43 compute-1 systemd[1]: Started Session 23 of User ceph-admin.
Dec 08 09:45:44 compute-1 sshd-session[72652]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 08 09:45:44 compute-1 sudo[72656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host --expect-hostname compute-1
Dec 08 09:45:44 compute-1 sudo[72656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:45:44 compute-1 sudo[72656]: pam_unix(sudo:session): session closed for user root
Dec 08 09:45:44 compute-1 sshd-session[72681]: Accepted publickey for ceph-admin from 192.168.122.100 port 40234 ssh2: RSA SHA256:8l9QqAG/R36mxhB8hMA3YKcV0YDWbaRrGefHfTlg3OU
Dec 08 09:45:44 compute-1 systemd-logind[795]: New session 24 of user ceph-admin.
Dec 08 09:45:44 compute-1 systemd[1]: Started Session 24 of User ceph-admin.
Dec 08 09:45:44 compute-1 sshd-session[72681]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 08 09:45:44 compute-1 sudo[72685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36
Dec 08 09:45:44 compute-1 sudo[72685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:45:44 compute-1 sudo[72685]: pam_unix(sudo:session): session closed for user root
Dec 08 09:45:44 compute-1 sshd-session[72710]: Accepted publickey for ceph-admin from 192.168.122.100 port 40238 ssh2: RSA SHA256:8l9QqAG/R36mxhB8hMA3YKcV0YDWbaRrGefHfTlg3OU
Dec 08 09:45:44 compute-1 systemd-logind[795]: New session 25 of user ceph-admin.
Dec 08 09:45:44 compute-1 systemd[1]: Started Session 25 of User ceph-admin.
Dec 08 09:45:44 compute-1 sshd-session[72710]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 08 09:45:44 compute-1 sudo[72714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4
Dec 08 09:45:44 compute-1 sudo[72714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:45:44 compute-1 sudo[72714]: pam_unix(sudo:session): session closed for user root
Dec 08 09:45:45 compute-1 sshd-session[72739]: Accepted publickey for ceph-admin from 192.168.122.100 port 40244 ssh2: RSA SHA256:8l9QqAG/R36mxhB8hMA3YKcV0YDWbaRrGefHfTlg3OU
Dec 08 09:45:45 compute-1 systemd-logind[795]: New session 26 of user ceph-admin.
Dec 08 09:45:45 compute-1 systemd[1]: Started Session 26 of User ceph-admin.
Dec 08 09:45:45 compute-1 sshd-session[72739]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 08 09:45:45 compute-1 sudo[72743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4
Dec 08 09:45:45 compute-1 sudo[72743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:45:45 compute-1 sudo[72743]: pam_unix(sudo:session): session closed for user root
Dec 08 09:45:45 compute-1 sshd-session[72768]: Accepted publickey for ceph-admin from 192.168.122.100 port 40252 ssh2: RSA SHA256:8l9QqAG/R36mxhB8hMA3YKcV0YDWbaRrGefHfTlg3OU
Dec 08 09:45:45 compute-1 systemd-logind[795]: New session 27 of user ceph-admin.
Dec 08 09:45:45 compute-1 systemd[1]: Started Session 27 of User ceph-admin.
Dec 08 09:45:45 compute-1 sshd-session[72768]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 08 09:45:45 compute-1 sudo[72772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36.new
Dec 08 09:45:45 compute-1 sudo[72772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:45:45 compute-1 sudo[72772]: pam_unix(sudo:session): session closed for user root
Dec 08 09:45:45 compute-1 sshd-session[72797]: Accepted publickey for ceph-admin from 192.168.122.100 port 40268 ssh2: RSA SHA256:8l9QqAG/R36mxhB8hMA3YKcV0YDWbaRrGefHfTlg3OU
Dec 08 09:45:45 compute-1 systemd-logind[795]: New session 28 of user ceph-admin.
Dec 08 09:45:45 compute-1 systemd[1]: Started Session 28 of User ceph-admin.
Dec 08 09:45:45 compute-1 sshd-session[72797]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 08 09:45:46 compute-1 sudo[72801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4
Dec 08 09:45:46 compute-1 sudo[72801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:45:46 compute-1 sudo[72801]: pam_unix(sudo:session): session closed for user root
Dec 08 09:45:46 compute-1 sshd-session[72826]: Accepted publickey for ceph-admin from 192.168.122.100 port 40284 ssh2: RSA SHA256:8l9QqAG/R36mxhB8hMA3YKcV0YDWbaRrGefHfTlg3OU
Dec 08 09:45:46 compute-1 systemd-logind[795]: New session 29 of user ceph-admin.
Dec 08 09:45:46 compute-1 systemd[1]: Started Session 29 of User ceph-admin.
Dec 08 09:45:46 compute-1 sshd-session[72826]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 08 09:45:46 compute-1 sudo[72830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36.new
Dec 08 09:45:46 compute-1 sudo[72830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:45:46 compute-1 sudo[72830]: pam_unix(sudo:session): session closed for user root
Dec 08 09:45:46 compute-1 sshd-session[72855]: Accepted publickey for ceph-admin from 192.168.122.100 port 40296 ssh2: RSA SHA256:8l9QqAG/R36mxhB8hMA3YKcV0YDWbaRrGefHfTlg3OU
Dec 08 09:45:46 compute-1 systemd-logind[795]: New session 30 of user ceph-admin.
Dec 08 09:45:46 compute-1 systemd[1]: Started Session 30 of User ceph-admin.
Dec 08 09:45:46 compute-1 sshd-session[72855]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 08 09:45:47 compute-1 sshd-session[72882]: Accepted publickey for ceph-admin from 192.168.122.100 port 40302 ssh2: RSA SHA256:8l9QqAG/R36mxhB8hMA3YKcV0YDWbaRrGefHfTlg3OU
Dec 08 09:45:47 compute-1 systemd-logind[795]: New session 31 of user ceph-admin.
Dec 08 09:45:47 compute-1 systemd[1]: Started Session 31 of User ceph-admin.
Dec 08 09:45:47 compute-1 sshd-session[72882]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 08 09:45:48 compute-1 sudo[72886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36.new /var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36
Dec 08 09:45:48 compute-1 sudo[72886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:45:48 compute-1 sudo[72886]: pam_unix(sudo:session): session closed for user root
Dec 08 09:45:48 compute-1 sshd-session[72911]: Accepted publickey for ceph-admin from 192.168.122.100 port 40316 ssh2: RSA SHA256:8l9QqAG/R36mxhB8hMA3YKcV0YDWbaRrGefHfTlg3OU
Dec 08 09:45:48 compute-1 systemd-logind[795]: New session 32 of user ceph-admin.
Dec 08 09:45:48 compute-1 systemd[1]: Started Session 32 of User ceph-admin.
Dec 08 09:45:48 compute-1 sshd-session[72911]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 08 09:45:48 compute-1 sudo[72915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host --expect-hostname compute-1
Dec 08 09:45:48 compute-1 sudo[72915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:45:48 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 08 09:45:48 compute-1 sudo[72915]: pam_unix(sudo:session): session closed for user root
Dec 08 09:45:48 compute-1 sudo[72960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 08 09:45:48 compute-1 sudo[72960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:45:48 compute-1 sudo[72960]: pam_unix(sudo:session): session closed for user root
Dec 08 09:45:48 compute-1 sudo[72986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Dec 08 09:45:48 compute-1 sudo[72986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:45:49 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 08 09:45:49 compute-1 sudo[72986]: pam_unix(sudo:session): session closed for user root
Dec 08 09:45:49 compute-1 sudo[73032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 08 09:45:49 compute-1 sudo[73032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:45:49 compute-1 sudo[73032]: pam_unix(sudo:session): session closed for user root
Dec 08 09:45:49 compute-1 sudo[73057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Dec 08 09:45:49 compute-1 sudo[73057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:45:49 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 08 09:45:49 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 08 09:45:49 compute-1 sudo[73057]: pam_unix(sudo:session): session closed for user root
Dec 08 09:45:49 compute-1 sudo[73119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 08 09:45:49 compute-1 sudo[73119]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:45:49 compute-1 sudo[73119]: pam_unix(sudo:session): session closed for user root
Dec 08 09:45:50 compute-1 sudo[73144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 08 09:45:50 compute-1 sudo[73144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:45:50 compute-1 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 73181 (sysctl)
Dec 08 09:45:50 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 08 09:45:50 compute-1 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Dec 08 09:45:50 compute-1 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Dec 08 09:45:51 compute-1 sudo[73144]: pam_unix(sudo:session): session closed for user root
Dec 08 09:45:51 compute-1 sudo[73203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 08 09:45:51 compute-1 sudo[73203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:45:51 compute-1 sudo[73203]: pam_unix(sudo:session): session closed for user root
Dec 08 09:45:51 compute-1 sudo[73228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Dec 08 09:45:51 compute-1 sudo[73228]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:45:51 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 08 09:45:51 compute-1 sudo[73228]: pam_unix(sudo:session): session closed for user root
Dec 08 09:45:51 compute-1 sudo[73271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 08 09:45:51 compute-1 sudo[73271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:45:51 compute-1 sudo[73271]: pam_unix(sudo:session): session closed for user root
Dec 08 09:45:51 compute-1 sudo[73296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4 -- inventory --format=json-pretty --filter-for-batch
Dec 08 09:45:51 compute-1 sudo[73296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:45:51 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 08 09:45:52 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 08 09:45:54 compute-1 systemd[1]: var-lib-containers-storage-overlay-compat3643736898-lower\x2dmapped.mount: Deactivated successfully.
Dec 08 09:46:07 compute-1 sshd-session[73418]: Invalid user terastar from 95.128.196.223 port 35746
Dec 08 09:46:07 compute-1 sshd-session[73418]: Received disconnect from 95.128.196.223 port 35746:11: Bye Bye [preauth]
Dec 08 09:46:07 compute-1 sshd-session[73418]: Disconnected from invalid user terastar 95.128.196.223 port 35746 [preauth]
Dec 08 09:46:09 compute-1 podman[73357]: 2025-12-08 09:46:09.795811409 +0000 UTC m=+17.701026841 container create 28de710e76243c26a601f31b50508862356815d2b5e94466703b6ec127ee9774 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 08 09:46:09 compute-1 systemd[1]: Created slice Virtual Machine and Container Slice.
Dec 08 09:46:09 compute-1 systemd[1]: Started libpod-conmon-28de710e76243c26a601f31b50508862356815d2b5e94466703b6ec127ee9774.scope.
Dec 08 09:46:09 compute-1 podman[73357]: 2025-12-08 09:46:09.764376666 +0000 UTC m=+17.669592088 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 08 09:46:09 compute-1 systemd[1]: Started libcrun container.
Dec 08 09:46:09 compute-1 podman[73357]: 2025-12-08 09:46:09.967099942 +0000 UTC m=+17.872315424 container init 28de710e76243c26a601f31b50508862356815d2b5e94466703b6ec127ee9774 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_hellman, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 08 09:46:09 compute-1 podman[73357]: 2025-12-08 09:46:09.983539314 +0000 UTC m=+17.888754736 container start 28de710e76243c26a601f31b50508862356815d2b5e94466703b6ec127ee9774 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_hellman, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 08 09:46:09 compute-1 podman[73357]: 2025-12-08 09:46:09.989237038 +0000 UTC m=+17.894452460 container attach 28de710e76243c26a601f31b50508862356815d2b5e94466703b6ec127ee9774 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 08 09:46:09 compute-1 clever_hellman[73421]: 167 167
Dec 08 09:46:09 compute-1 systemd[1]: libpod-28de710e76243c26a601f31b50508862356815d2b5e94466703b6ec127ee9774.scope: Deactivated successfully.
Dec 08 09:46:09 compute-1 podman[73357]: 2025-12-08 09:46:09.994542361 +0000 UTC m=+17.899757763 container died 28de710e76243c26a601f31b50508862356815d2b5e94466703b6ec127ee9774 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_hellman, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True)
Dec 08 09:46:10 compute-1 systemd[1]: var-lib-containers-storage-overlay-0bb4c0a2993d152bcd73ee2656e86cc6c656d35df06204b160e7fc09f8b02c42-merged.mount: Deactivated successfully.
Dec 08 09:46:10 compute-1 podman[73357]: 2025-12-08 09:46:10.039427991 +0000 UTC m=+17.944643393 container remove 28de710e76243c26a601f31b50508862356815d2b5e94466703b6ec127ee9774 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_hellman, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 08 09:46:10 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 08 09:46:10 compute-1 systemd[1]: libpod-conmon-28de710e76243c26a601f31b50508862356815d2b5e94466703b6ec127ee9774.scope: Deactivated successfully.
Dec 08 09:46:10 compute-1 podman[73445]: 2025-12-08 09:46:10.277694649 +0000 UTC m=+0.058736679 container create bec9d12abaf6bb3fa7854e112af772513cd787a8bf34b019900792c971447668 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_turing, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 08 09:46:10 compute-1 systemd[1]: Started libpod-conmon-bec9d12abaf6bb3fa7854e112af772513cd787a8bf34b019900792c971447668.scope.
Dec 08 09:46:10 compute-1 systemd[1]: Started libcrun container.
Dec 08 09:46:10 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e729fd142371a47264500ae8d006496774a7a5dc24357d2c83ac8f80d1048a69/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 08 09:46:10 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e729fd142371a47264500ae8d006496774a7a5dc24357d2c83ac8f80d1048a69/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 08 09:46:10 compute-1 podman[73445]: 2025-12-08 09:46:10.34523586 +0000 UTC m=+0.126277930 container init bec9d12abaf6bb3fa7854e112af772513cd787a8bf34b019900792c971447668 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_turing, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 08 09:46:10 compute-1 podman[73445]: 2025-12-08 09:46:10.352482398 +0000 UTC m=+0.133524448 container start bec9d12abaf6bb3fa7854e112af772513cd787a8bf34b019900792c971447668 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 08 09:46:10 compute-1 podman[73445]: 2025-12-08 09:46:10.259251459 +0000 UTC m=+0.040293509 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 08 09:46:10 compute-1 podman[73445]: 2025-12-08 09:46:10.356822703 +0000 UTC m=+0.137864743 container attach bec9d12abaf6bb3fa7854e112af772513cd787a8bf34b019900792c971447668 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec 08 09:46:11 compute-1 nice_turing[73461]: [
Dec 08 09:46:11 compute-1 nice_turing[73461]:     {
Dec 08 09:46:11 compute-1 nice_turing[73461]:         "available": false,
Dec 08 09:46:11 compute-1 nice_turing[73461]:         "being_replaced": false,
Dec 08 09:46:11 compute-1 nice_turing[73461]:         "ceph_device_lvm": false,
Dec 08 09:46:11 compute-1 nice_turing[73461]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 08 09:46:11 compute-1 nice_turing[73461]:         "lsm_data": {},
Dec 08 09:46:11 compute-1 nice_turing[73461]:         "lvs": [],
Dec 08 09:46:11 compute-1 nice_turing[73461]:         "path": "/dev/sr0",
Dec 08 09:46:11 compute-1 nice_turing[73461]:         "rejected_reasons": [
Dec 08 09:46:11 compute-1 nice_turing[73461]:             "Has a FileSystem",
Dec 08 09:46:11 compute-1 nice_turing[73461]:             "Insufficient space (<5GB)"
Dec 08 09:46:11 compute-1 nice_turing[73461]:         ],
Dec 08 09:46:11 compute-1 nice_turing[73461]:         "sys_api": {
Dec 08 09:46:11 compute-1 nice_turing[73461]:             "actuators": null,
Dec 08 09:46:11 compute-1 nice_turing[73461]:             "device_nodes": [
Dec 08 09:46:11 compute-1 nice_turing[73461]:                 "sr0"
Dec 08 09:46:11 compute-1 nice_turing[73461]:             ],
Dec 08 09:46:11 compute-1 nice_turing[73461]:             "devname": "sr0",
Dec 08 09:46:11 compute-1 nice_turing[73461]:             "human_readable_size": "482.00 KB",
Dec 08 09:46:11 compute-1 nice_turing[73461]:             "id_bus": "ata",
Dec 08 09:46:11 compute-1 nice_turing[73461]:             "model": "QEMU DVD-ROM",
Dec 08 09:46:11 compute-1 nice_turing[73461]:             "nr_requests": "2",
Dec 08 09:46:11 compute-1 nice_turing[73461]:             "parent": "/dev/sr0",
Dec 08 09:46:11 compute-1 nice_turing[73461]:             "partitions": {},
Dec 08 09:46:11 compute-1 nice_turing[73461]:             "path": "/dev/sr0",
Dec 08 09:46:11 compute-1 nice_turing[73461]:             "removable": "1",
Dec 08 09:46:11 compute-1 nice_turing[73461]:             "rev": "2.5+",
Dec 08 09:46:11 compute-1 nice_turing[73461]:             "ro": "0",
Dec 08 09:46:11 compute-1 nice_turing[73461]:             "rotational": "1",
Dec 08 09:46:11 compute-1 nice_turing[73461]:             "sas_address": "",
Dec 08 09:46:11 compute-1 nice_turing[73461]:             "sas_device_handle": "",
Dec 08 09:46:11 compute-1 nice_turing[73461]:             "scheduler_mode": "mq-deadline",
Dec 08 09:46:11 compute-1 nice_turing[73461]:             "sectors": 0,
Dec 08 09:46:11 compute-1 nice_turing[73461]:             "sectorsize": "2048",
Dec 08 09:46:11 compute-1 nice_turing[73461]:             "size": 493568.0,
Dec 08 09:46:11 compute-1 nice_turing[73461]:             "support_discard": "2048",
Dec 08 09:46:11 compute-1 nice_turing[73461]:             "type": "disk",
Dec 08 09:46:11 compute-1 nice_turing[73461]:             "vendor": "QEMU"
Dec 08 09:46:11 compute-1 nice_turing[73461]:         }
Dec 08 09:46:11 compute-1 nice_turing[73461]:     }
Dec 08 09:46:11 compute-1 nice_turing[73461]: ]
Dec 08 09:46:11 compute-1 systemd[1]: libpod-bec9d12abaf6bb3fa7854e112af772513cd787a8bf34b019900792c971447668.scope: Deactivated successfully.
Dec 08 09:46:11 compute-1 podman[73445]: 2025-12-08 09:46:11.150486254 +0000 UTC m=+0.931528284 container died bec9d12abaf6bb3fa7854e112af772513cd787a8bf34b019900792c971447668 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_turing, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec 08 09:46:11 compute-1 systemd[1]: var-lib-containers-storage-overlay-e729fd142371a47264500ae8d006496774a7a5dc24357d2c83ac8f80d1048a69-merged.mount: Deactivated successfully.
Dec 08 09:46:11 compute-1 podman[73445]: 2025-12-08 09:46:11.191321138 +0000 UTC m=+0.972363188 container remove bec9d12abaf6bb3fa7854e112af772513cd787a8bf34b019900792c971447668 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_turing, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec 08 09:46:11 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 08 09:46:11 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 08 09:46:11 compute-1 systemd[1]: libpod-conmon-bec9d12abaf6bb3fa7854e112af772513cd787a8bf34b019900792c971447668.scope: Deactivated successfully.
Dec 08 09:46:11 compute-1 sudo[73296]: pam_unix(sudo:session): session closed for user root
Dec 08 09:46:11 compute-1 sudo[74346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 08 09:46:11 compute-1 sudo[74346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:46:11 compute-1 sudo[74346]: pam_unix(sudo:session): session closed for user root
Dec 08 09:46:11 compute-1 sudo[74371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/etc/ceph
Dec 08 09:46:11 compute-1 sudo[74371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:46:11 compute-1 sudo[74371]: pam_unix(sudo:session): session closed for user root
Dec 08 09:46:11 compute-1 sudo[74396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/etc/ceph/ceph.conf.new
Dec 08 09:46:11 compute-1 sudo[74396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:46:11 compute-1 sudo[74396]: pam_unix(sudo:session): session closed for user root
Dec 08 09:46:11 compute-1 sudo[74421]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4
Dec 08 09:46:11 compute-1 sudo[74421]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:46:11 compute-1 sudo[74421]: pam_unix(sudo:session): session closed for user root
Dec 08 09:46:11 compute-1 sudo[74446]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/etc/ceph/ceph.conf.new
Dec 08 09:46:11 compute-1 sudo[74446]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:46:11 compute-1 sudo[74446]: pam_unix(sudo:session): session closed for user root
Dec 08 09:46:11 compute-1 sudo[74494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/etc/ceph/ceph.conf.new
Dec 08 09:46:11 compute-1 sudo[74494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:46:11 compute-1 sudo[74494]: pam_unix(sudo:session): session closed for user root
Dec 08 09:46:11 compute-1 sudo[74519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/etc/ceph/ceph.conf.new
Dec 08 09:46:11 compute-1 sudo[74519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:46:11 compute-1 sudo[74519]: pam_unix(sudo:session): session closed for user root
Dec 08 09:46:11 compute-1 sudo[74544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 08 09:46:11 compute-1 sudo[74544]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:46:11 compute-1 sudo[74544]: pam_unix(sudo:session): session closed for user root
Dec 08 09:46:11 compute-1 sudo[74569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config
Dec 08 09:46:11 compute-1 sudo[74569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:46:11 compute-1 sudo[74569]: pam_unix(sudo:session): session closed for user root
Dec 08 09:46:12 compute-1 sudo[74594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config
Dec 08 09:46:12 compute-1 sudo[74594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:46:12 compute-1 sudo[74594]: pam_unix(sudo:session): session closed for user root
Dec 08 09:46:12 compute-1 sudo[74619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf.new
Dec 08 09:46:12 compute-1 sudo[74619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:46:12 compute-1 sudo[74619]: pam_unix(sudo:session): session closed for user root
Dec 08 09:46:12 compute-1 sudo[74644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4
Dec 08 09:46:12 compute-1 sudo[74644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:46:12 compute-1 sudo[74644]: pam_unix(sudo:session): session closed for user root
Dec 08 09:46:12 compute-1 sudo[74669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf.new
Dec 08 09:46:12 compute-1 sudo[74669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:46:12 compute-1 sudo[74669]: pam_unix(sudo:session): session closed for user root
Dec 08 09:46:12 compute-1 sshd-session[73479]: Received disconnect from 103.191.92.236 port 39316:11: Bye Bye [preauth]
Dec 08 09:46:12 compute-1 sshd-session[73479]: Disconnected from authenticating user root 103.191.92.236 port 39316 [preauth]
Dec 08 09:46:12 compute-1 sudo[74717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf.new
Dec 08 09:46:12 compute-1 sudo[74717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:46:12 compute-1 sudo[74717]: pam_unix(sudo:session): session closed for user root
Dec 08 09:46:12 compute-1 sudo[74742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf.new
Dec 08 09:46:12 compute-1 sudo[74742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:46:12 compute-1 sudo[74742]: pam_unix(sudo:session): session closed for user root
Dec 08 09:46:12 compute-1 sudo[74767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf.new /var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf
Dec 08 09:46:12 compute-1 sudo[74767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:46:12 compute-1 sudo[74767]: pam_unix(sudo:session): session closed for user root
Dec 08 09:46:12 compute-1 sudo[74792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 08 09:46:12 compute-1 sudo[74792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:46:12 compute-1 sudo[74792]: pam_unix(sudo:session): session closed for user root
Dec 08 09:46:12 compute-1 sudo[74817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/etc/ceph
Dec 08 09:46:12 compute-1 sudo[74817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:46:12 compute-1 sudo[74817]: pam_unix(sudo:session): session closed for user root
Dec 08 09:46:12 compute-1 sudo[74842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/etc/ceph/ceph.client.admin.keyring.new
Dec 08 09:46:12 compute-1 sudo[74842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:46:12 compute-1 sudo[74842]: pam_unix(sudo:session): session closed for user root
Dec 08 09:46:12 compute-1 sudo[74867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4
Dec 08 09:46:12 compute-1 sudo[74867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:46:12 compute-1 sudo[74867]: pam_unix(sudo:session): session closed for user root
Dec 08 09:46:13 compute-1 sudo[74892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/etc/ceph/ceph.client.admin.keyring.new
Dec 08 09:46:13 compute-1 sudo[74892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:46:13 compute-1 sudo[74892]: pam_unix(sudo:session): session closed for user root
Dec 08 09:46:13 compute-1 sudo[74940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/etc/ceph/ceph.client.admin.keyring.new
Dec 08 09:46:13 compute-1 sudo[74940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:46:13 compute-1 sudo[74940]: pam_unix(sudo:session): session closed for user root
Dec 08 09:46:13 compute-1 sudo[74965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/etc/ceph/ceph.client.admin.keyring.new
Dec 08 09:46:13 compute-1 sudo[74965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:46:13 compute-1 sudo[74965]: pam_unix(sudo:session): session closed for user root
Dec 08 09:46:13 compute-1 sudo[74990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Dec 08 09:46:13 compute-1 sudo[74990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:46:13 compute-1 sudo[74990]: pam_unix(sudo:session): session closed for user root
Dec 08 09:46:13 compute-1 sudo[75015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config
Dec 08 09:46:13 compute-1 sudo[75015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:46:13 compute-1 sudo[75015]: pam_unix(sudo:session): session closed for user root
Dec 08 09:46:13 compute-1 sudo[75040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config
Dec 08 09:46:13 compute-1 sudo[75040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:46:13 compute-1 sudo[75040]: pam_unix(sudo:session): session closed for user root
Dec 08 09:46:13 compute-1 sudo[75065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.client.admin.keyring.new
Dec 08 09:46:13 compute-1 sudo[75065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:46:13 compute-1 sudo[75065]: pam_unix(sudo:session): session closed for user root
Dec 08 09:46:13 compute-1 sudo[75090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4
Dec 08 09:46:13 compute-1 sudo[75090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:46:13 compute-1 sudo[75090]: pam_unix(sudo:session): session closed for user root
Dec 08 09:46:13 compute-1 sudo[75115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.client.admin.keyring.new
Dec 08 09:46:13 compute-1 sudo[75115]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:46:13 compute-1 sudo[75115]: pam_unix(sudo:session): session closed for user root
Dec 08 09:46:13 compute-1 sudo[75163]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.client.admin.keyring.new
Dec 08 09:46:13 compute-1 sudo[75163]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:46:13 compute-1 sudo[75163]: pam_unix(sudo:session): session closed for user root
Dec 08 09:46:13 compute-1 sudo[75189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.client.admin.keyring.new
Dec 08 09:46:13 compute-1 sudo[75189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:46:13 compute-1 sudo[75189]: pam_unix(sudo:session): session closed for user root
Dec 08 09:46:13 compute-1 sudo[75214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.client.admin.keyring.new /var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.client.admin.keyring
Dec 08 09:46:13 compute-1 sudo[75214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:46:13 compute-1 sudo[75214]: pam_unix(sudo:session): session closed for user root
Dec 08 09:46:14 compute-1 sudo[75239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 08 09:46:14 compute-1 sudo[75239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:46:14 compute-1 sudo[75239]: pam_unix(sudo:session): session closed for user root
Dec 08 09:46:14 compute-1 sudo[75264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4
Dec 08 09:46:14 compute-1 sudo[75264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:46:14 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 08 09:46:14 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 08 09:46:14 compute-1 podman[75330]: 2025-12-08 09:46:14.557326121 +0000 UTC m=+0.023814706 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 08 09:46:14 compute-1 podman[75330]: 2025-12-08 09:46:14.747348874 +0000 UTC m=+0.213837459 container create 6566aec0de2d5dfb3d9e8804f49b2ba6fa01ba0abee3bf2c69b4ee9068586525 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_driscoll, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 08 09:46:14 compute-1 systemd[1]: Started libpod-conmon-6566aec0de2d5dfb3d9e8804f49b2ba6fa01ba0abee3bf2c69b4ee9068586525.scope.
Dec 08 09:46:14 compute-1 systemd[1]: Started libcrun container.
Dec 08 09:46:14 compute-1 podman[75330]: 2025-12-08 09:46:14.847074229 +0000 UTC m=+0.313562844 container init 6566aec0de2d5dfb3d9e8804f49b2ba6fa01ba0abee3bf2c69b4ee9068586525 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_driscoll, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 08 09:46:14 compute-1 podman[75330]: 2025-12-08 09:46:14.859887887 +0000 UTC m=+0.326376502 container start 6566aec0de2d5dfb3d9e8804f49b2ba6fa01ba0abee3bf2c69b4ee9068586525 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_driscoll, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 08 09:46:14 compute-1 podman[75330]: 2025-12-08 09:46:14.864364196 +0000 UTC m=+0.330852781 container attach 6566aec0de2d5dfb3d9e8804f49b2ba6fa01ba0abee3bf2c69b4ee9068586525 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_driscoll, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec 08 09:46:14 compute-1 focused_driscoll[75346]: 167 167
Dec 08 09:46:14 compute-1 systemd[1]: libpod-6566aec0de2d5dfb3d9e8804f49b2ba6fa01ba0abee3bf2c69b4ee9068586525.scope: Deactivated successfully.
Dec 08 09:46:14 compute-1 podman[75351]: 2025-12-08 09:46:14.939055632 +0000 UTC m=+0.044326465 container died 6566aec0de2d5dfb3d9e8804f49b2ba6fa01ba0abee3bf2c69b4ee9068586525 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 08 09:46:14 compute-1 podman[75351]: 2025-12-08 09:46:14.983321405 +0000 UTC m=+0.088592158 container remove 6566aec0de2d5dfb3d9e8804f49b2ba6fa01ba0abee3bf2c69b4ee9068586525 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_driscoll, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 08 09:46:14 compute-1 systemd[1]: libpod-conmon-6566aec0de2d5dfb3d9e8804f49b2ba6fa01ba0abee3bf2c69b4ee9068586525.scope: Deactivated successfully.
Dec 08 09:46:15 compute-1 systemd[1]: Reloading.
Dec 08 09:46:15 compute-1 systemd-sysv-generator[75394]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 08 09:46:15 compute-1 systemd-rc-local-generator[75391]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 08 09:46:15 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 08 09:46:15 compute-1 systemd[1]: Reloading.
Dec 08 09:46:15 compute-1 systemd-rc-local-generator[75432]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 08 09:46:15 compute-1 systemd-sysv-generator[75435]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 08 09:46:15 compute-1 systemd[1]: Reached target All Ceph clusters and services.
Dec 08 09:46:15 compute-1 systemd[1]: Reloading.
Dec 08 09:46:15 compute-1 systemd-sysv-generator[75470]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 08 09:46:15 compute-1 systemd-rc-local-generator[75465]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 08 09:46:15 compute-1 systemd[1]: Reached target Ceph cluster ceb838ef-9d5d-54e4-bddb-2f01adce2ad4.
Dec 08 09:46:15 compute-1 systemd[1]: Reloading.
Dec 08 09:46:15 compute-1 systemd-sysv-generator[75503]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 08 09:46:15 compute-1 systemd-rc-local-generator[75500]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 08 09:46:16 compute-1 systemd[1]: Reloading.
Dec 08 09:46:16 compute-1 systemd-rc-local-generator[75546]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 08 09:46:16 compute-1 systemd-sysv-generator[75549]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 08 09:46:16 compute-1 systemd[1]: Created slice Slice /system/ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4.
Dec 08 09:46:16 compute-1 systemd[1]: Reached target System Time Set.
Dec 08 09:46:16 compute-1 systemd[1]: Reached target System Time Synchronized.
Dec 08 09:46:16 compute-1 systemd[1]: Starting Ceph crash.compute-1 for ceb838ef-9d5d-54e4-bddb-2f01adce2ad4...
Dec 08 09:46:16 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 08 09:46:16 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 08 09:46:16 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 08 09:46:16 compute-1 podman[75605]: 2025-12-08 09:46:16.63933158 +0000 UTC m=+0.065860964 container create 0b1ceffabe238cda53122741c28a5b5e679f0efe3fbc9966719b6787d3af3102 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-crash-compute-1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 08 09:46:16 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6384b3317ac5f26fb4585c2c786290a2fb4276d1adb089a0274f55f51d817421/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 08 09:46:16 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6384b3317ac5f26fb4585c2c786290a2fb4276d1adb089a0274f55f51d817421/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 08 09:46:16 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6384b3317ac5f26fb4585c2c786290a2fb4276d1adb089a0274f55f51d817421/merged/etc/ceph/ceph.client.crash.compute-1.keyring supports timestamps until 2038 (0x7fffffff)
Dec 08 09:46:16 compute-1 podman[75605]: 2025-12-08 09:46:16.706164601 +0000 UTC m=+0.132694035 container init 0b1ceffabe238cda53122741c28a5b5e679f0efe3fbc9966719b6787d3af3102 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-crash-compute-1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec 08 09:46:16 compute-1 podman[75605]: 2025-12-08 09:46:16.617798651 +0000 UTC m=+0.044328075 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 08 09:46:16 compute-1 podman[75605]: 2025-12-08 09:46:16.712792611 +0000 UTC m=+0.139322035 container start 0b1ceffabe238cda53122741c28a5b5e679f0efe3fbc9966719b6787d3af3102 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-crash-compute-1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 08 09:46:16 compute-1 bash[75605]: 0b1ceffabe238cda53122741c28a5b5e679f0efe3fbc9966719b6787d3af3102
Dec 08 09:46:16 compute-1 systemd[1]: Started Ceph crash.compute-1 for ceb838ef-9d5d-54e4-bddb-2f01adce2ad4.
Dec 08 09:46:16 compute-1 sudo[75264]: pam_unix(sudo:session): session closed for user root
Dec 08 09:46:16 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-crash-compute-1[75619]: INFO:ceph-crash:pinging cluster to exercise our key
Dec 08 09:46:16 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-crash-compute-1[75619]: 2025-12-08T09:46:16.901+0000 7fbfb3fff640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Dec 08 09:46:16 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-crash-compute-1[75619]: 2025-12-08T09:46:16.901+0000 7fbfb3fff640 -1 AuthRegistry(0x7fbfb40698f0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Dec 08 09:46:16 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-crash-compute-1[75619]: 2025-12-08T09:46:16.903+0000 7fbfb3fff640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Dec 08 09:46:16 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-crash-compute-1[75619]: 2025-12-08T09:46:16.903+0000 7fbfb3fff640 -1 AuthRegistry(0x7fbfb3ffdff0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Dec 08 09:46:16 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-crash-compute-1[75619]: 2025-12-08T09:46:16.906+0000 7fbfb2ffd640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Dec 08 09:46:16 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-crash-compute-1[75619]: 2025-12-08T09:46:16.906+0000 7fbfb3fff640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Dec 08 09:46:16 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-crash-compute-1[75619]: [errno 13] RADOS permission denied (error connecting to the cluster)
Dec 08 09:46:16 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-crash-compute-1[75619]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Dec 08 09:46:16 compute-1 sudo[75627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 08 09:46:16 compute-1 sudo[75627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:46:16 compute-1 sudo[75627]: pam_unix(sudo:session): session closed for user root
Dec 08 09:46:17 compute-1 sudo[75662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 08 09:46:17 compute-1 sudo[75662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:46:17 compute-1 podman[75729]: 2025-12-08 09:46:17.394430373 +0000 UTC m=+0.047373313 container create 502000778b9092b76af1bdb5a93c6433bd7909a9398aa25974166e4ca55bf190 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_chebyshev, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 08 09:46:17 compute-1 systemd[1]: Started libpod-conmon-502000778b9092b76af1bdb5a93c6433bd7909a9398aa25974166e4ca55bf190.scope.
Dec 08 09:46:17 compute-1 systemd[1]: Started libcrun container.
Dec 08 09:46:17 compute-1 podman[75729]: 2025-12-08 09:46:17.461875871 +0000 UTC m=+0.114818811 container init 502000778b9092b76af1bdb5a93c6433bd7909a9398aa25974166e4ca55bf190 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_chebyshev, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec 08 09:46:17 compute-1 podman[75729]: 2025-12-08 09:46:17.468147761 +0000 UTC m=+0.121090681 container start 502000778b9092b76af1bdb5a93c6433bd7909a9398aa25974166e4ca55bf190 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_chebyshev, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec 08 09:46:17 compute-1 podman[75729]: 2025-12-08 09:46:17.37519224 +0000 UTC m=+0.028135190 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 08 09:46:17 compute-1 podman[75729]: 2025-12-08 09:46:17.470997683 +0000 UTC m=+0.123940643 container attach 502000778b9092b76af1bdb5a93c6433bd7909a9398aa25974166e4ca55bf190 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 08 09:46:17 compute-1 confident_chebyshev[75745]: 167 167
Dec 08 09:46:17 compute-1 systemd[1]: libpod-502000778b9092b76af1bdb5a93c6433bd7909a9398aa25974166e4ca55bf190.scope: Deactivated successfully.
Dec 08 09:46:17 compute-1 conmon[75745]: conmon 502000778b9092b76af1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-502000778b9092b76af1bdb5a93c6433bd7909a9398aa25974166e4ca55bf190.scope/container/memory.events
Dec 08 09:46:17 compute-1 podman[75729]: 2025-12-08 09:46:17.474842084 +0000 UTC m=+0.127785034 container died 502000778b9092b76af1bdb5a93c6433bd7909a9398aa25974166e4ca55bf190 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_chebyshev, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 08 09:46:17 compute-1 podman[75729]: 2025-12-08 09:46:17.525542631 +0000 UTC m=+0.178485581 container remove 502000778b9092b76af1bdb5a93c6433bd7909a9398aa25974166e4ca55bf190 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_chebyshev, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 08 09:46:17 compute-1 systemd[1]: libpod-conmon-502000778b9092b76af1bdb5a93c6433bd7909a9398aa25974166e4ca55bf190.scope: Deactivated successfully.
Dec 08 09:46:17 compute-1 podman[75769]: 2025-12-08 09:46:17.691239033 +0000 UTC m=+0.056258358 container create 2453bd95d82d41b6d65df421bf3d5017dfcb8d9172f862c6d0b3f8bd409c4cdb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_murdock, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 08 09:46:17 compute-1 systemd[1]: Started libpod-conmon-2453bd95d82d41b6d65df421bf3d5017dfcb8d9172f862c6d0b3f8bd409c4cdb.scope.
Dec 08 09:46:17 compute-1 systemd[1]: Started libcrun container.
Dec 08 09:46:17 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ecb53944163c3744ae7380688bcd46ce8a55ed9099085295dff86bbc0d9cfae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 08 09:46:17 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ecb53944163c3744ae7380688bcd46ce8a55ed9099085295dff86bbc0d9cfae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 08 09:46:17 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ecb53944163c3744ae7380688bcd46ce8a55ed9099085295dff86bbc0d9cfae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 08 09:46:17 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ecb53944163c3744ae7380688bcd46ce8a55ed9099085295dff86bbc0d9cfae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 08 09:46:17 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ecb53944163c3744ae7380688bcd46ce8a55ed9099085295dff86bbc0d9cfae/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 08 09:46:17 compute-1 podman[75769]: 2025-12-08 09:46:17.663430464 +0000 UTC m=+0.028449839 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 08 09:46:17 compute-1 podman[75769]: 2025-12-08 09:46:17.767107994 +0000 UTC m=+0.132127409 container init 2453bd95d82d41b6d65df421bf3d5017dfcb8d9172f862c6d0b3f8bd409c4cdb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_murdock, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Dec 08 09:46:17 compute-1 podman[75769]: 2025-12-08 09:46:17.776269387 +0000 UTC m=+0.141288702 container start 2453bd95d82d41b6d65df421bf3d5017dfcb8d9172f862c6d0b3f8bd409c4cdb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 08 09:46:17 compute-1 podman[75769]: 2025-12-08 09:46:17.779258313 +0000 UTC m=+0.144277738 container attach 2453bd95d82d41b6d65df421bf3d5017dfcb8d9172f862c6d0b3f8bd409c4cdb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 08 09:46:18 compute-1 thirsty_murdock[75785]: --> passed data devices: 0 physical, 1 LVM
Dec 08 09:46:18 compute-1 thirsty_murdock[75785]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 08 09:46:18 compute-1 thirsty_murdock[75785]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 08 09:46:18 compute-1 thirsty_murdock[75785]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new c550a2b3-dc83-454a-a82b-745064d6ae84
Dec 08 09:46:18 compute-1 thirsty_murdock[75785]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Dec 08 09:46:18 compute-1 thirsty_murdock[75785]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Dec 08 09:46:18 compute-1 thirsty_murdock[75785]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 08 09:46:18 compute-1 thirsty_murdock[75785]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec 08 09:46:18 compute-1 lvm[75846]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 08 09:46:18 compute-1 lvm[75846]: VG ceph_vg0 finished
Dec 08 09:46:18 compute-1 thirsty_murdock[75785]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Dec 08 09:46:19 compute-1 thirsty_murdock[75785]:  stderr: got monmap epoch 1
Dec 08 09:46:19 compute-1 thirsty_murdock[75785]: --> Creating keyring file for osd.0
Dec 08 09:46:19 compute-1 thirsty_murdock[75785]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Dec 08 09:46:19 compute-1 thirsty_murdock[75785]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Dec 08 09:46:19 compute-1 thirsty_murdock[75785]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid c550a2b3-dc83-454a-a82b-745064d6ae84 --setuser ceph --setgroup ceph
Dec 08 09:46:22 compute-1 thirsty_murdock[75785]:  stderr: 2025-12-08T09:46:19.365+0000 7f15b23bc740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) No valid bdev label found
Dec 08 09:46:22 compute-1 thirsty_murdock[75785]:  stderr: 2025-12-08T09:46:19.630+0000 7f15b23bc740 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Dec 08 09:46:22 compute-1 thirsty_murdock[75785]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Dec 08 09:46:22 compute-1 thirsty_murdock[75785]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 08 09:46:22 compute-1 thirsty_murdock[75785]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Dec 08 09:46:22 compute-1 thirsty_murdock[75785]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec 08 09:46:22 compute-1 thirsty_murdock[75785]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Dec 08 09:46:22 compute-1 thirsty_murdock[75785]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 08 09:46:22 compute-1 thirsty_murdock[75785]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 08 09:46:22 compute-1 thirsty_murdock[75785]: --> ceph-volume lvm activate successful for osd ID: 0
Dec 08 09:46:22 compute-1 thirsty_murdock[75785]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Dec 08 09:46:22 compute-1 systemd[1]: libpod-2453bd95d82d41b6d65df421bf3d5017dfcb8d9172f862c6d0b3f8bd409c4cdb.scope: Deactivated successfully.
Dec 08 09:46:22 compute-1 systemd[1]: libpod-2453bd95d82d41b6d65df421bf3d5017dfcb8d9172f862c6d0b3f8bd409c4cdb.scope: Consumed 2.197s CPU time.
Dec 08 09:46:22 compute-1 podman[75769]: 2025-12-08 09:46:22.869547123 +0000 UTC m=+5.234566468 container died 2453bd95d82d41b6d65df421bf3d5017dfcb8d9172f862c6d0b3f8bd409c4cdb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_murdock, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 08 09:46:22 compute-1 systemd[1]: var-lib-containers-storage-overlay-5ecb53944163c3744ae7380688bcd46ce8a55ed9099085295dff86bbc0d9cfae-merged.mount: Deactivated successfully.
Dec 08 09:46:22 compute-1 podman[75769]: 2025-12-08 09:46:22.907531375 +0000 UTC m=+5.272550700 container remove 2453bd95d82d41b6d65df421bf3d5017dfcb8d9172f862c6d0b3f8bd409c4cdb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 08 09:46:22 compute-1 systemd[1]: libpod-conmon-2453bd95d82d41b6d65df421bf3d5017dfcb8d9172f862c6d0b3f8bd409c4cdb.scope: Deactivated successfully.
Dec 08 09:46:22 compute-1 sudo[75662]: pam_unix(sudo:session): session closed for user root
Dec 08 09:46:23 compute-1 sudo[76769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 08 09:46:23 compute-1 sudo[76769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:46:23 compute-1 sudo[76769]: pam_unix(sudo:session): session closed for user root
Dec 08 09:46:23 compute-1 sudo[76794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4 -- lvm list --format json
Dec 08 09:46:23 compute-1 sudo[76794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:46:23 compute-1 podman[76859]: 2025-12-08 09:46:23.520606806 +0000 UTC m=+0.050689538 container create bd2be47591c0d75d2a869db0bd9362c549ae4480e8dff073679be4d39bcda051 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_banzai, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Dec 08 09:46:23 compute-1 systemd[1]: Started libpod-conmon-bd2be47591c0d75d2a869db0bd9362c549ae4480e8dff073679be4d39bcda051.scope.
Dec 08 09:46:23 compute-1 systemd[1]: Started libcrun container.
Dec 08 09:46:23 compute-1 podman[76859]: 2025-12-08 09:46:23.495378461 +0000 UTC m=+0.025461243 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 08 09:46:23 compute-1 podman[76859]: 2025-12-08 09:46:23.600340438 +0000 UTC m=+0.130423140 container init bd2be47591c0d75d2a869db0bd9362c549ae4480e8dff073679be4d39bcda051 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 08 09:46:23 compute-1 podman[76859]: 2025-12-08 09:46:23.607545135 +0000 UTC m=+0.137627827 container start bd2be47591c0d75d2a869db0bd9362c549ae4480e8dff073679be4d39bcda051 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_banzai, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 08 09:46:23 compute-1 podman[76859]: 2025-12-08 09:46:23.61085713 +0000 UTC m=+0.140939832 container attach bd2be47591c0d75d2a869db0bd9362c549ae4480e8dff073679be4d39bcda051 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_banzai, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec 08 09:46:23 compute-1 jovial_banzai[76875]: 167 167
Dec 08 09:46:23 compute-1 systemd[1]: libpod-bd2be47591c0d75d2a869db0bd9362c549ae4480e8dff073679be4d39bcda051.scope: Deactivated successfully.
Dec 08 09:46:23 compute-1 podman[76859]: 2025-12-08 09:46:23.613673151 +0000 UTC m=+0.143755863 container died bd2be47591c0d75d2a869db0bd9362c549ae4480e8dff073679be4d39bcda051 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 08 09:46:23 compute-1 systemd[1]: var-lib-containers-storage-overlay-fd7eaf6875b917f1be9f9e89f71ccc2dbcab752aa7bebebf6a94bbc7b9a060cb-merged.mount: Deactivated successfully.
Dec 08 09:46:23 compute-1 podman[76859]: 2025-12-08 09:46:23.645538037 +0000 UTC m=+0.175620749 container remove bd2be47591c0d75d2a869db0bd9362c549ae4480e8dff073679be4d39bcda051 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_banzai, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 08 09:46:23 compute-1 systemd[1]: libpod-conmon-bd2be47591c0d75d2a869db0bd9362c549ae4480e8dff073679be4d39bcda051.scope: Deactivated successfully.
Dec 08 09:46:23 compute-1 podman[76899]: 2025-12-08 09:46:23.794703344 +0000 UTC m=+0.037998953 container create 0c9690ce4ee8aeca8e6170ad7c388cd221ab6f4ac39c39ab31b9be401f6aac63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_mcclintock, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 08 09:46:23 compute-1 systemd[1]: Started libpod-conmon-0c9690ce4ee8aeca8e6170ad7c388cd221ab6f4ac39c39ab31b9be401f6aac63.scope.
Dec 08 09:46:23 compute-1 podman[76899]: 2025-12-08 09:46:23.777558881 +0000 UTC m=+0.020854500 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 08 09:46:23 compute-1 systemd[1]: Started libcrun container.
Dec 08 09:46:23 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb38398d13f8dde76db7294d1e75fe6c5dc8784639512d1c5aaeb1f02d12a0d2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 08 09:46:23 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb38398d13f8dde76db7294d1e75fe6c5dc8784639512d1c5aaeb1f02d12a0d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 08 09:46:23 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb38398d13f8dde76db7294d1e75fe6c5dc8784639512d1c5aaeb1f02d12a0d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 08 09:46:23 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb38398d13f8dde76db7294d1e75fe6c5dc8784639512d1c5aaeb1f02d12a0d2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 08 09:46:23 compute-1 podman[76899]: 2025-12-08 09:46:23.905254761 +0000 UTC m=+0.148550410 container init 0c9690ce4ee8aeca8e6170ad7c388cd221ab6f4ac39c39ab31b9be401f6aac63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_mcclintock, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 08 09:46:23 compute-1 podman[76899]: 2025-12-08 09:46:23.911337476 +0000 UTC m=+0.154633075 container start 0c9690ce4ee8aeca8e6170ad7c388cd221ab6f4ac39c39ab31b9be401f6aac63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_mcclintock, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 08 09:46:23 compute-1 podman[76899]: 2025-12-08 09:46:23.914778365 +0000 UTC m=+0.158073964 container attach 0c9690ce4ee8aeca8e6170ad7c388cd221ab6f4ac39c39ab31b9be401f6aac63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 08 09:46:24 compute-1 wonderful_mcclintock[76915]: {
Dec 08 09:46:24 compute-1 wonderful_mcclintock[76915]:     "0": [
Dec 08 09:46:24 compute-1 wonderful_mcclintock[76915]:         {
Dec 08 09:46:24 compute-1 wonderful_mcclintock[76915]:             "devices": [
Dec 08 09:46:24 compute-1 wonderful_mcclintock[76915]:                 "/dev/loop3"
Dec 08 09:46:24 compute-1 wonderful_mcclintock[76915]:             ],
Dec 08 09:46:24 compute-1 wonderful_mcclintock[76915]:             "lv_name": "ceph_lv0",
Dec 08 09:46:24 compute-1 wonderful_mcclintock[76915]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 08 09:46:24 compute-1 wonderful_mcclintock[76915]:             "lv_size": "21470642176",
Dec 08 09:46:24 compute-1 wonderful_mcclintock[76915]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7Qwryf-1ScL-wMHy-UIL1-SUpe-P9qV-uwHVpY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ceb838ef-9d5d-54e4-bddb-2f01adce2ad4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c550a2b3-dc83-454a-a82b-745064d6ae84,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 08 09:46:24 compute-1 wonderful_mcclintock[76915]:             "lv_uuid": "7Qwryf-1ScL-wMHy-UIL1-SUpe-P9qV-uwHVpY",
Dec 08 09:46:24 compute-1 wonderful_mcclintock[76915]:             "name": "ceph_lv0",
Dec 08 09:46:24 compute-1 wonderful_mcclintock[76915]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 08 09:46:24 compute-1 wonderful_mcclintock[76915]:             "tags": {
Dec 08 09:46:24 compute-1 wonderful_mcclintock[76915]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 08 09:46:24 compute-1 wonderful_mcclintock[76915]:                 "ceph.block_uuid": "7Qwryf-1ScL-wMHy-UIL1-SUpe-P9qV-uwHVpY",
Dec 08 09:46:24 compute-1 wonderful_mcclintock[76915]:                 "ceph.cephx_lockbox_secret": "",
Dec 08 09:46:24 compute-1 wonderful_mcclintock[76915]:                 "ceph.cluster_fsid": "ceb838ef-9d5d-54e4-bddb-2f01adce2ad4",
Dec 08 09:46:24 compute-1 wonderful_mcclintock[76915]:                 "ceph.cluster_name": "ceph",
Dec 08 09:46:24 compute-1 wonderful_mcclintock[76915]:                 "ceph.crush_device_class": "",
Dec 08 09:46:24 compute-1 wonderful_mcclintock[76915]:                 "ceph.encrypted": "0",
Dec 08 09:46:24 compute-1 wonderful_mcclintock[76915]:                 "ceph.osd_fsid": "c550a2b3-dc83-454a-a82b-745064d6ae84",
Dec 08 09:46:24 compute-1 wonderful_mcclintock[76915]:                 "ceph.osd_id": "0",
Dec 08 09:46:24 compute-1 wonderful_mcclintock[76915]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 08 09:46:24 compute-1 wonderful_mcclintock[76915]:                 "ceph.type": "block",
Dec 08 09:46:24 compute-1 wonderful_mcclintock[76915]:                 "ceph.vdo": "0",
Dec 08 09:46:24 compute-1 wonderful_mcclintock[76915]:                 "ceph.with_tpm": "0"
Dec 08 09:46:24 compute-1 wonderful_mcclintock[76915]:             },
Dec 08 09:46:24 compute-1 wonderful_mcclintock[76915]:             "type": "block",
Dec 08 09:46:24 compute-1 wonderful_mcclintock[76915]:             "vg_name": "ceph_vg0"
Dec 08 09:46:24 compute-1 wonderful_mcclintock[76915]:         }
Dec 08 09:46:24 compute-1 wonderful_mcclintock[76915]:     ]
Dec 08 09:46:24 compute-1 wonderful_mcclintock[76915]: }
Dec 08 09:46:24 compute-1 systemd[1]: libpod-0c9690ce4ee8aeca8e6170ad7c388cd221ab6f4ac39c39ab31b9be401f6aac63.scope: Deactivated successfully.
Dec 08 09:46:24 compute-1 podman[76899]: 2025-12-08 09:46:24.212694238 +0000 UTC m=+0.455989877 container died 0c9690ce4ee8aeca8e6170ad7c388cd221ab6f4ac39c39ab31b9be401f6aac63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_mcclintock, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 08 09:46:24 compute-1 systemd[1]: var-lib-containers-storage-overlay-bb38398d13f8dde76db7294d1e75fe6c5dc8784639512d1c5aaeb1f02d12a0d2-merged.mount: Deactivated successfully.
Dec 08 09:46:24 compute-1 podman[76899]: 2025-12-08 09:46:24.272132746 +0000 UTC m=+0.515428355 container remove 0c9690ce4ee8aeca8e6170ad7c388cd221ab6f4ac39c39ab31b9be401f6aac63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_mcclintock, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec 08 09:46:24 compute-1 systemd[1]: libpod-conmon-0c9690ce4ee8aeca8e6170ad7c388cd221ab6f4ac39c39ab31b9be401f6aac63.scope: Deactivated successfully.
Dec 08 09:46:24 compute-1 sudo[76794]: pam_unix(sudo:session): session closed for user root
Dec 08 09:46:24 compute-1 sudo[76935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 08 09:46:24 compute-1 sudo[76935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:46:24 compute-1 sudo[76935]: pam_unix(sudo:session): session closed for user root
Dec 08 09:46:24 compute-1 sudo[76960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4
Dec 08 09:46:24 compute-1 sudo[76960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:46:24 compute-1 podman[77023]: 2025-12-08 09:46:24.923184098 +0000 UTC m=+0.043731718 container create a879b651bcdebc8ef3b7371573cfaa813512a130d9c95d82f7cb6d533012ad81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_wiles, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 08 09:46:24 compute-1 systemd[1]: Started libpod-conmon-a879b651bcdebc8ef3b7371573cfaa813512a130d9c95d82f7cb6d533012ad81.scope.
Dec 08 09:46:24 compute-1 systemd[1]: Started libcrun container.
Dec 08 09:46:24 compute-1 podman[77023]: 2025-12-08 09:46:24.903870943 +0000 UTC m=+0.024418603 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 08 09:46:25 compute-1 podman[77023]: 2025-12-08 09:46:25.010363764 +0000 UTC m=+0.130911424 container init a879b651bcdebc8ef3b7371573cfaa813512a130d9c95d82f7cb6d533012ad81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_wiles, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 08 09:46:25 compute-1 podman[77023]: 2025-12-08 09:46:25.020556007 +0000 UTC m=+0.141103667 container start a879b651bcdebc8ef3b7371573cfaa813512a130d9c95d82f7cb6d533012ad81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_wiles, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec 08 09:46:25 compute-1 podman[77023]: 2025-12-08 09:46:25.02450582 +0000 UTC m=+0.145053500 container attach a879b651bcdebc8ef3b7371573cfaa813512a130d9c95d82f7cb6d533012ad81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_wiles, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 08 09:46:25 compute-1 zealous_wiles[77040]: 167 167
Dec 08 09:46:25 compute-1 systemd[1]: libpod-a879b651bcdebc8ef3b7371573cfaa813512a130d9c95d82f7cb6d533012ad81.scope: Deactivated successfully.
Dec 08 09:46:25 compute-1 podman[77023]: 2025-12-08 09:46:25.026794906 +0000 UTC m=+0.147342526 container died a879b651bcdebc8ef3b7371573cfaa813512a130d9c95d82f7cb6d533012ad81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_wiles, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 08 09:46:25 compute-1 systemd[1]: var-lib-containers-storage-overlay-ade4fefee3b83b3fbc936f6836bf594e85c031f8290b2a5337195deae1232524-merged.mount: Deactivated successfully.
Dec 08 09:46:25 compute-1 podman[77023]: 2025-12-08 09:46:25.063063979 +0000 UTC m=+0.183611609 container remove a879b651bcdebc8ef3b7371573cfaa813512a130d9c95d82f7cb6d533012ad81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 08 09:46:25 compute-1 systemd[1]: libpod-conmon-a879b651bcdebc8ef3b7371573cfaa813512a130d9c95d82f7cb6d533012ad81.scope: Deactivated successfully.
Dec 08 09:46:25 compute-1 podman[77070]: 2025-12-08 09:46:25.424771285 +0000 UTC m=+0.062248081 container create a69be8fd96a38989e87ae76537e5461626416fd6778670d039157d50f59a8d6a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-osd-0-activate-test, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 08 09:46:25 compute-1 systemd[1]: Started libpod-conmon-a69be8fd96a38989e87ae76537e5461626416fd6778670d039157d50f59a8d6a.scope.
Dec 08 09:46:25 compute-1 systemd[1]: Started libcrun container.
Dec 08 09:46:25 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cc437bace35130a942e879b364cc1f90a25048bd4bfe8b376dab284d28e56ce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 08 09:46:25 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cc437bace35130a942e879b364cc1f90a25048bd4bfe8b376dab284d28e56ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 08 09:46:25 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cc437bace35130a942e879b364cc1f90a25048bd4bfe8b376dab284d28e56ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 08 09:46:25 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cc437bace35130a942e879b364cc1f90a25048bd4bfe8b376dab284d28e56ce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 08 09:46:25 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cc437bace35130a942e879b364cc1f90a25048bd4bfe8b376dab284d28e56ce/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Dec 08 09:46:25 compute-1 podman[77070]: 2025-12-08 09:46:25.407350384 +0000 UTC m=+0.044827210 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 08 09:46:25 compute-1 podman[77070]: 2025-12-08 09:46:25.507477072 +0000 UTC m=+0.144953878 container init a69be8fd96a38989e87ae76537e5461626416fd6778670d039157d50f59a8d6a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-osd-0-activate-test, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 08 09:46:25 compute-1 podman[77070]: 2025-12-08 09:46:25.514731 +0000 UTC m=+0.152207796 container start a69be8fd96a38989e87ae76537e5461626416fd6778670d039157d50f59a8d6a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-osd-0-activate-test, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True)
Dec 08 09:46:25 compute-1 podman[77070]: 2025-12-08 09:46:25.518483898 +0000 UTC m=+0.155960704 container attach a69be8fd96a38989e87ae76537e5461626416fd6778670d039157d50f59a8d6a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-osd-0-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 08 09:46:25 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-osd-0-activate-test[77087]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Dec 08 09:46:25 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-osd-0-activate-test[77087]:                             [--no-systemd] [--no-tmpfs]
Dec 08 09:46:25 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-osd-0-activate-test[77087]: ceph-volume activate: error: unrecognized arguments: --bad-option
Dec 08 09:46:25 compute-1 systemd[1]: libpod-a69be8fd96a38989e87ae76537e5461626416fd6778670d039157d50f59a8d6a.scope: Deactivated successfully.
Dec 08 09:46:25 compute-1 podman[77070]: 2025-12-08 09:46:25.689903915 +0000 UTC m=+0.327380721 container died a69be8fd96a38989e87ae76537e5461626416fd6778670d039157d50f59a8d6a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-osd-0-activate-test, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 08 09:46:25 compute-1 systemd[1]: var-lib-containers-storage-overlay-2cc437bace35130a942e879b364cc1f90a25048bd4bfe8b376dab284d28e56ce-merged.mount: Deactivated successfully.
Dec 08 09:46:25 compute-1 podman[77070]: 2025-12-08 09:46:25.72766098 +0000 UTC m=+0.365137766 container remove a69be8fd96a38989e87ae76537e5461626416fd6778670d039157d50f59a8d6a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-osd-0-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec 08 09:46:25 compute-1 systemd[1]: libpod-conmon-a69be8fd96a38989e87ae76537e5461626416fd6778670d039157d50f59a8d6a.scope: Deactivated successfully.
Dec 08 09:46:25 compute-1 systemd[1]: Reloading.
Dec 08 09:46:25 compute-1 systemd-sysv-generator[77150]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 08 09:46:26 compute-1 systemd-rc-local-generator[77142]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 08 09:46:26 compute-1 systemd[1]: Reloading.
Dec 08 09:46:26 compute-1 systemd-rc-local-generator[77191]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 08 09:46:26 compute-1 systemd-sysv-generator[77195]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 08 09:46:26 compute-1 systemd[1]: Starting Ceph osd.0 for ceb838ef-9d5d-54e4-bddb-2f01adce2ad4...
Dec 08 09:46:26 compute-1 podman[77250]: 2025-12-08 09:46:26.820118777 +0000 UTC m=+0.068582288 container create b974145d69849614f49bca0828d5b28c0808bbb1762436e9edef3a0536a71469 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-osd-0-activate, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec 08 09:46:26 compute-1 podman[77250]: 2025-12-08 09:46:26.790527447 +0000 UTC m=+0.038991008 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 08 09:46:26 compute-1 systemd[1]: Started libcrun container.
Dec 08 09:46:26 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34bfaf953c966aa6bfde82f424caa27050d12dddb866e1142b70e1f57cdc96a5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 08 09:46:26 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34bfaf953c966aa6bfde82f424caa27050d12dddb866e1142b70e1f57cdc96a5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 08 09:46:26 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34bfaf953c966aa6bfde82f424caa27050d12dddb866e1142b70e1f57cdc96a5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 08 09:46:26 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34bfaf953c966aa6bfde82f424caa27050d12dddb866e1142b70e1f57cdc96a5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 08 09:46:26 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34bfaf953c966aa6bfde82f424caa27050d12dddb866e1142b70e1f57cdc96a5/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Dec 08 09:46:26 compute-1 podman[77250]: 2025-12-08 09:46:26.914598259 +0000 UTC m=+0.163061750 container init b974145d69849614f49bca0828d5b28c0808bbb1762436e9edef3a0536a71469 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-osd-0-activate, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 08 09:46:26 compute-1 podman[77250]: 2025-12-08 09:46:26.922621397 +0000 UTC m=+0.171084878 container start b974145d69849614f49bca0828d5b28c0808bbb1762436e9edef3a0536a71469 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-osd-0-activate, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec 08 09:46:26 compute-1 podman[77250]: 2025-12-08 09:46:26.92696037 +0000 UTC m=+0.175423881 container attach b974145d69849614f49bca0828d5b28c0808bbb1762436e9edef3a0536a71469 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-osd-0-activate, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid)
Dec 08 09:46:27 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-osd-0-activate[77265]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 08 09:46:27 compute-1 bash[77250]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 08 09:46:27 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-osd-0-activate[77265]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 08 09:46:27 compute-1 bash[77250]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 08 09:46:27 compute-1 lvm[77346]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 08 09:46:27 compute-1 lvm[77346]: VG ceph_vg0 finished
Dec 08 09:46:27 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-osd-0-activate[77265]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec 08 09:46:27 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-osd-0-activate[77265]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 08 09:46:27 compute-1 bash[77250]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec 08 09:46:27 compute-1 bash[77250]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 08 09:46:27 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-osd-0-activate[77265]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 08 09:46:27 compute-1 bash[77250]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 08 09:46:27 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-osd-0-activate[77265]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 08 09:46:27 compute-1 bash[77250]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 08 09:46:27 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-osd-0-activate[77265]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Dec 08 09:46:27 compute-1 bash[77250]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Dec 08 09:46:28 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-osd-0-activate[77265]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec 08 09:46:28 compute-1 bash[77250]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec 08 09:46:28 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-osd-0-activate[77265]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Dec 08 09:46:28 compute-1 bash[77250]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Dec 08 09:46:28 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-osd-0-activate[77265]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 08 09:46:28 compute-1 bash[77250]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 08 09:46:28 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-osd-0-activate[77265]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 08 09:46:28 compute-1 bash[77250]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 08 09:46:28 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-osd-0-activate[77265]: --> ceph-volume lvm activate successful for osd ID: 0
Dec 08 09:46:28 compute-1 bash[77250]: --> ceph-volume lvm activate successful for osd ID: 0
Dec 08 09:46:28 compute-1 systemd[1]: libpod-b974145d69849614f49bca0828d5b28c0808bbb1762436e9edef3a0536a71469.scope: Deactivated successfully.
Dec 08 09:46:28 compute-1 podman[77250]: 2025-12-08 09:46:28.223237819 +0000 UTC m=+1.471701300 container died b974145d69849614f49bca0828d5b28c0808bbb1762436e9edef3a0536a71469 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-osd-0-activate, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec 08 09:46:28 compute-1 systemd[1]: libpod-b974145d69849614f49bca0828d5b28c0808bbb1762436e9edef3a0536a71469.scope: Consumed 1.415s CPU time.
Dec 08 09:46:28 compute-1 systemd[1]: var-lib-containers-storage-overlay-34bfaf953c966aa6bfde82f424caa27050d12dddb866e1142b70e1f57cdc96a5-merged.mount: Deactivated successfully.
Dec 08 09:46:28 compute-1 podman[77250]: 2025-12-08 09:46:28.271492569 +0000 UTC m=+1.519956050 container remove b974145d69849614f49bca0828d5b28c0808bbb1762436e9edef3a0536a71469 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-osd-0-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec 08 09:46:28 compute-1 podman[77511]: 2025-12-08 09:46:28.484084464 +0000 UTC m=+0.044322179 container create ff49752169efb93a6e792f541cf5e74ddde52e19370a17c4422bf4bbea0dae2e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-osd-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True)
Dec 08 09:46:28 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff2e54ebb8cf37565145c95694b8225d85ba7438c3b9d6570f02f5b3ca7120a1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 08 09:46:28 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff2e54ebb8cf37565145c95694b8225d85ba7438c3b9d6570f02f5b3ca7120a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 08 09:46:28 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff2e54ebb8cf37565145c95694b8225d85ba7438c3b9d6570f02f5b3ca7120a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 08 09:46:28 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff2e54ebb8cf37565145c95694b8225d85ba7438c3b9d6570f02f5b3ca7120a1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 08 09:46:28 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff2e54ebb8cf37565145c95694b8225d85ba7438c3b9d6570f02f5b3ca7120a1/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Dec 08 09:46:28 compute-1 podman[77511]: 2025-12-08 09:46:28.548442761 +0000 UTC m=+0.108680546 container init ff49752169efb93a6e792f541cf5e74ddde52e19370a17c4422bf4bbea0dae2e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-osd-0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Dec 08 09:46:28 compute-1 podman[77511]: 2025-12-08 09:46:28.461313948 +0000 UTC m=+0.021551693 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 08 09:46:28 compute-1 podman[77511]: 2025-12-08 09:46:28.561152442 +0000 UTC m=+0.121390147 container start ff49752169efb93a6e792f541cf5e74ddde52e19370a17c4422bf4bbea0dae2e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-osd-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Dec 08 09:46:28 compute-1 bash[77511]: ff49752169efb93a6e792f541cf5e74ddde52e19370a17c4422bf4bbea0dae2e
Dec 08 09:46:28 compute-1 systemd[1]: Started Ceph osd.0 for ceb838ef-9d5d-54e4-bddb-2f01adce2ad4.
Dec 08 09:46:28 compute-1 ceph-osd[77531]: set uid:gid to 167:167 (ceph:ceph)
Dec 08 09:46:28 compute-1 ceph-osd[77531]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-osd, pid 2
Dec 08 09:46:28 compute-1 ceph-osd[77531]: pidfile_write: ignore empty --pid-file
Dec 08 09:46:28 compute-1 ceph-osd[77531]: bdev(0x5640a24c5800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 08 09:46:28 compute-1 ceph-osd[77531]: bdev(0x5640a24c5800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 08 09:46:28 compute-1 ceph-osd[77531]: bdev(0x5640a24c5800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 08 09:46:28 compute-1 ceph-osd[77531]: bdev(0x5640a24c5800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 08 09:46:28 compute-1 ceph-osd[77531]: bdev(0x5640a24c5800 /var/lib/ceph/osd/ceph-0/block) close
Dec 08 09:46:28 compute-1 sudo[76960]: pam_unix(sudo:session): session closed for user root
Dec 08 09:46:28 compute-1 sudo[77543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 08 09:46:28 compute-1 sudo[77543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:46:28 compute-1 sudo[77543]: pam_unix(sudo:session): session closed for user root
Dec 08 09:46:28 compute-1 sudo[77568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4 -- raw list --format json
Dec 08 09:46:28 compute-1 sudo[77568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:46:28 compute-1 ceph-osd[77531]: bdev(0x5640a24c5800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 08 09:46:28 compute-1 ceph-osd[77531]: bdev(0x5640a24c5800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 08 09:46:28 compute-1 ceph-osd[77531]: bdev(0x5640a24c5800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 08 09:46:28 compute-1 ceph-osd[77531]: bdev(0x5640a24c5800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 08 09:46:28 compute-1 ceph-osd[77531]: bdev(0x5640a24c5800 /var/lib/ceph/osd/ceph-0/block) close
Dec 08 09:46:28 compute-1 ceph-osd[77531]: bdev(0x5640a24c5800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 08 09:46:28 compute-1 ceph-osd[77531]: bdev(0x5640a24c5800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 08 09:46:28 compute-1 ceph-osd[77531]: bdev(0x5640a24c5800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 08 09:46:28 compute-1 ceph-osd[77531]: bdev(0x5640a24c5800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 08 09:46:28 compute-1 ceph-osd[77531]: bdev(0x5640a24c5800 /var/lib/ceph/osd/ceph-0/block) close
Dec 08 09:46:29 compute-1 podman[77635]: 2025-12-08 09:46:29.165659603 +0000 UTC m=+0.053241293 container create ff246466294d20355e56d51b79059b58a871863f5f680a34dc80341a69732ef1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_margulis, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 08 09:46:29 compute-1 ceph-osd[77531]: bdev(0x5640a24c5800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 08 09:46:29 compute-1 ceph-osd[77531]: bdev(0x5640a24c5800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 08 09:46:29 compute-1 ceph-osd[77531]: bdev(0x5640a24c5800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 08 09:46:29 compute-1 ceph-osd[77531]: bdev(0x5640a24c5800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 08 09:46:29 compute-1 ceph-osd[77531]: bdev(0x5640a24c5800 /var/lib/ceph/osd/ceph-0/block) close
Dec 08 09:46:29 compute-1 systemd[1]: Started libpod-conmon-ff246466294d20355e56d51b79059b58a871863f5f680a34dc80341a69732ef1.scope.
Dec 08 09:46:29 compute-1 podman[77635]: 2025-12-08 09:46:29.13915979 +0000 UTC m=+0.026741500 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 08 09:46:29 compute-1 systemd[1]: Started libcrun container.
Dec 08 09:46:29 compute-1 podman[77635]: 2025-12-08 09:46:29.257365856 +0000 UTC m=+0.144947646 container init ff246466294d20355e56d51b79059b58a871863f5f680a34dc80341a69732ef1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_margulis, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec 08 09:46:29 compute-1 podman[77635]: 2025-12-08 09:46:29.269555642 +0000 UTC m=+0.157137372 container start ff246466294d20355e56d51b79059b58a871863f5f680a34dc80341a69732ef1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_margulis, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 08 09:46:29 compute-1 podman[77635]: 2025-12-08 09:46:29.274363909 +0000 UTC m=+0.161945689 container attach ff246466294d20355e56d51b79059b58a871863f5f680a34dc80341a69732ef1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_margulis, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 08 09:46:29 compute-1 quirky_margulis[77653]: 167 167
Dec 08 09:46:29 compute-1 systemd[1]: libpod-ff246466294d20355e56d51b79059b58a871863f5f680a34dc80341a69732ef1.scope: Deactivated successfully.
Dec 08 09:46:29 compute-1 conmon[77653]: conmon ff246466294d20355e56 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ff246466294d20355e56d51b79059b58a871863f5f680a34dc80341a69732ef1.scope/container/memory.events
Dec 08 09:46:29 compute-1 podman[77635]: 2025-12-08 09:46:29.276971513 +0000 UTC m=+0.164553293 container died ff246466294d20355e56d51b79059b58a871863f5f680a34dc80341a69732ef1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_margulis, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Dec 08 09:46:29 compute-1 systemd[1]: var-lib-containers-storage-overlay-67e3e57d49794fe268605cb7a268d38e035642db99cca8b6972a39bbf4674e60-merged.mount: Deactivated successfully.
Dec 08 09:46:29 compute-1 podman[77635]: 2025-12-08 09:46:29.319873391 +0000 UTC m=+0.207455091 container remove ff246466294d20355e56d51b79059b58a871863f5f680a34dc80341a69732ef1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_margulis, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 08 09:46:29 compute-1 systemd[1]: libpod-conmon-ff246466294d20355e56d51b79059b58a871863f5f680a34dc80341a69732ef1.scope: Deactivated successfully.
Dec 08 09:46:29 compute-1 ceph-osd[77531]: bdev(0x5640a24c5800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 08 09:46:29 compute-1 ceph-osd[77531]: bdev(0x5640a24c5800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 08 09:46:29 compute-1 ceph-osd[77531]: bdev(0x5640a24c5800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 08 09:46:29 compute-1 ceph-osd[77531]: bdev(0x5640a24c5800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 08 09:46:29 compute-1 ceph-osd[77531]: bdev(0x5640a24c5800 /var/lib/ceph/osd/ceph-0/block) close
Dec 08 09:46:29 compute-1 podman[77679]: 2025-12-08 09:46:29.539274108 +0000 UTC m=+0.050747212 container create 5f8bc39648a45d9be2aef69e4abcdad314c365451ef5bc24e70d534a9f391e2e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_neumann, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 08 09:46:29 compute-1 systemd[1]: Started libpod-conmon-5f8bc39648a45d9be2aef69e4abcdad314c365451ef5bc24e70d534a9f391e2e.scope.
Dec 08 09:46:29 compute-1 podman[77679]: 2025-12-08 09:46:29.517130939 +0000 UTC m=+0.028604093 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 08 09:46:29 compute-1 systemd[1]: Started libcrun container.
Dec 08 09:46:29 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae211d1f7ed96af6759710de793633860ad8cf935dc2ce0f2bfc86e7d69b4a93/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 08 09:46:29 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae211d1f7ed96af6759710de793633860ad8cf935dc2ce0f2bfc86e7d69b4a93/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 08 09:46:29 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae211d1f7ed96af6759710de793633860ad8cf935dc2ce0f2bfc86e7d69b4a93/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 08 09:46:29 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae211d1f7ed96af6759710de793633860ad8cf935dc2ce0f2bfc86e7d69b4a93/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 08 09:46:29 compute-1 podman[77679]: 2025-12-08 09:46:29.652499472 +0000 UTC m=+0.163972596 container init 5f8bc39648a45d9be2aef69e4abcdad314c365451ef5bc24e70d534a9f391e2e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec 08 09:46:29 compute-1 podman[77679]: 2025-12-08 09:46:29.662200778 +0000 UTC m=+0.173673892 container start 5f8bc39648a45d9be2aef69e4abcdad314c365451ef5bc24e70d534a9f391e2e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_neumann, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 08 09:46:29 compute-1 podman[77679]: 2025-12-08 09:46:29.665380888 +0000 UTC m=+0.176854002 container attach 5f8bc39648a45d9be2aef69e4abcdad314c365451ef5bc24e70d534a9f391e2e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_neumann, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 08 09:46:29 compute-1 ceph-osd[77531]: bdev(0x5640a24c5800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 08 09:46:29 compute-1 ceph-osd[77531]: bdev(0x5640a24c5800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 08 09:46:29 compute-1 ceph-osd[77531]: bdev(0x5640a24c5800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 08 09:46:29 compute-1 ceph-osd[77531]: bdev(0x5640a24c5800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 08 09:46:29 compute-1 ceph-osd[77531]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 08 09:46:29 compute-1 ceph-osd[77531]: bdev(0x5640a24c5c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 08 09:46:29 compute-1 ceph-osd[77531]: bdev(0x5640a24c5c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 08 09:46:29 compute-1 ceph-osd[77531]: bdev(0x5640a24c5c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 08 09:46:29 compute-1 ceph-osd[77531]: bdev(0x5640a24c5c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 08 09:46:29 compute-1 ceph-osd[77531]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Dec 08 09:46:29 compute-1 ceph-osd[77531]: bdev(0x5640a24c5c00 /var/lib/ceph/osd/ceph-0/block) close
Dec 08 09:46:29 compute-1 ceph-osd[77531]: bdev(0x5640a24c5800 /var/lib/ceph/osd/ceph-0/block) close
Dec 08 09:46:30 compute-1 ceph-osd[77531]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Dec 08 09:46:30 compute-1 ceph-osd[77531]: load: jerasure load: lrc 
Dec 08 09:46:30 compute-1 ceph-osd[77531]: bdev(0x5640a3360c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 08 09:46:30 compute-1 ceph-osd[77531]: bdev(0x5640a3360c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 08 09:46:30 compute-1 ceph-osd[77531]: bdev(0x5640a3360c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 08 09:46:30 compute-1 ceph-osd[77531]: bdev(0x5640a3360c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 08 09:46:30 compute-1 ceph-osd[77531]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 08 09:46:30 compute-1 ceph-osd[77531]: bdev(0x5640a3360c00 /var/lib/ceph/osd/ceph-0/block) close
Dec 08 09:46:30 compute-1 lvm[77779]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 08 09:46:30 compute-1 lvm[77779]: VG ceph_vg0 finished
Dec 08 09:46:30 compute-1 strange_neumann[77695]: {}
Dec 08 09:46:30 compute-1 systemd[1]: libpod-5f8bc39648a45d9be2aef69e4abcdad314c365451ef5bc24e70d534a9f391e2e.scope: Deactivated successfully.
Dec 08 09:46:30 compute-1 systemd[1]: libpod-5f8bc39648a45d9be2aef69e4abcdad314c365451ef5bc24e70d534a9f391e2e.scope: Consumed 1.137s CPU time.
Dec 08 09:46:30 compute-1 podman[77679]: 2025-12-08 09:46:30.473954042 +0000 UTC m=+0.985427176 container died 5f8bc39648a45d9be2aef69e4abcdad314c365451ef5bc24e70d534a9f391e2e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 08 09:46:30 compute-1 systemd[1]: var-lib-containers-storage-overlay-ae211d1f7ed96af6759710de793633860ad8cf935dc2ce0f2bfc86e7d69b4a93-merged.mount: Deactivated successfully.
Dec 08 09:46:30 compute-1 ceph-osd[77531]: bdev(0x5640a3360c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 08 09:46:30 compute-1 ceph-osd[77531]: bdev(0x5640a3360c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 08 09:46:30 compute-1 ceph-osd[77531]: bdev(0x5640a3360c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 08 09:46:30 compute-1 ceph-osd[77531]: bdev(0x5640a3360c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 08 09:46:30 compute-1 ceph-osd[77531]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 08 09:46:30 compute-1 ceph-osd[77531]: bdev(0x5640a3360c00 /var/lib/ceph/osd/ceph-0/block) close
Dec 08 09:46:30 compute-1 podman[77679]: 2025-12-08 09:46:30.532286218 +0000 UTC m=+1.043759332 container remove 5f8bc39648a45d9be2aef69e4abcdad314c365451ef5bc24e70d534a9f391e2e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_neumann, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec 08 09:46:30 compute-1 systemd[1]: libpod-conmon-5f8bc39648a45d9be2aef69e4abcdad314c365451ef5bc24e70d534a9f391e2e.scope: Deactivated successfully.
Dec 08 09:46:30 compute-1 sudo[77568]: pam_unix(sudo:session): session closed for user root
Dec 08 09:46:30 compute-1 ceph-osd[77531]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Dec 08 09:46:30 compute-1 ceph-osd[77531]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Dec 08 09:46:30 compute-1 ceph-osd[77531]: bdev(0x5640a3360c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 08 09:46:30 compute-1 ceph-osd[77531]: bdev(0x5640a3360c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 08 09:46:30 compute-1 ceph-osd[77531]: bdev(0x5640a3360c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 08 09:46:30 compute-1 ceph-osd[77531]: bdev(0x5640a3360c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 08 09:46:30 compute-1 ceph-osd[77531]: bdev(0x5640a3360c00 /var/lib/ceph/osd/ceph-0/block) close
Dec 08 09:46:30 compute-1 ceph-osd[77531]: bdev(0x5640a3360c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 08 09:46:30 compute-1 ceph-osd[77531]: bdev(0x5640a3360c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 08 09:46:30 compute-1 ceph-osd[77531]: bdev(0x5640a3360c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 08 09:46:30 compute-1 ceph-osd[77531]: bdev(0x5640a3360c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 08 09:46:30 compute-1 ceph-osd[77531]: bdev(0x5640a3360c00 /var/lib/ceph/osd/ceph-0/block) close
Dec 08 09:46:30 compute-1 ceph-osd[77531]: bdev(0x5640a3360c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 08 09:46:30 compute-1 ceph-osd[77531]: bdev(0x5640a3360c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 08 09:46:30 compute-1 ceph-osd[77531]: bdev(0x5640a3360c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 08 09:46:30 compute-1 ceph-osd[77531]: bdev(0x5640a3360c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 08 09:46:30 compute-1 ceph-osd[77531]: bdev(0x5640a3360c00 /var/lib/ceph/osd/ceph-0/block) close
Dec 08 09:46:30 compute-1 ceph-osd[77531]: bdev(0x5640a3360c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 08 09:46:30 compute-1 ceph-osd[77531]: bdev(0x5640a3360c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 08 09:46:30 compute-1 ceph-osd[77531]: bdev(0x5640a3360c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 08 09:46:30 compute-1 ceph-osd[77531]: bdev(0x5640a3360c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 08 09:46:30 compute-1 ceph-osd[77531]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 08 09:46:30 compute-1 ceph-osd[77531]: bdev(0x5640a3361000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 08 09:46:30 compute-1 ceph-osd[77531]: bdev(0x5640a3361000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 08 09:46:30 compute-1 ceph-osd[77531]: bdev(0x5640a3361000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 08 09:46:30 compute-1 ceph-osd[77531]: bdev(0x5640a3361000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 08 09:46:30 compute-1 ceph-osd[77531]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Dec 08 09:46:30 compute-1 ceph-osd[77531]: bluefs mount
Dec 08 09:46:30 compute-1 ceph-osd[77531]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec 08 09:46:30 compute-1 ceph-osd[77531]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 08 09:46:30 compute-1 ceph-osd[77531]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 08 09:46:30 compute-1 ceph-osd[77531]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 08 09:46:30 compute-1 ceph-osd[77531]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 08 09:46:30 compute-1 ceph-osd[77531]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 08 09:46:30 compute-1 ceph-osd[77531]: bluefs mount shared_bdev_used = 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: RocksDB version: 7.9.2
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Git sha 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Compile date 2025-07-17 03:12:14
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: DB SUMMARY
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: DB Session ID:  AZ48YSQ82983S74LVWV4
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: CURRENT file:  CURRENT
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: IDENTITY file:  IDENTITY
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                         Options.error_if_exists: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                       Options.create_if_missing: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                         Options.paranoid_checks: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                                     Options.env: 0x5640a3331dc0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                                Options.info_log: 0x5640a33357a0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                Options.max_file_opening_threads: 16
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                              Options.statistics: (nil)
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                               Options.use_fsync: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                       Options.max_log_file_size: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                         Options.allow_fallocate: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                        Options.use_direct_reads: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:          Options.create_missing_column_families: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                              Options.db_log_dir: 
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                                 Options.wal_dir: db.wal
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                   Options.advise_random_on_open: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                    Options.write_buffer_manager: 0x5640a343ea00
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                            Options.rate_limiter: (nil)
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                  Options.unordered_write: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                               Options.row_cache: None
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                              Options.wal_filter: None
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:             Options.allow_ingest_behind: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:             Options.two_write_queues: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:             Options.manual_wal_flush: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:             Options.wal_compression: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:             Options.atomic_flush: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                 Options.log_readahead_size: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:             Options.allow_data_in_errors: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:             Options.db_host_id: __hostname__
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:             Options.max_background_jobs: 4
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:             Options.max_background_compactions: -1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:             Options.max_subcompactions: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                          Options.max_open_files: -1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                          Options.bytes_per_sync: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                  Options.max_background_flushes: -1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Compression algorithms supported:
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         kZSTD supported: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         kXpressCompression supported: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         kBZip2Compression supported: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         kLZ4Compression supported: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         kZlibCompression supported: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         kLZ4HCCompression supported: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         kSnappyCompression supported: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:        Options.compaction_filter: None
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:        Options.compaction_filter_factory: None
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:  Options.sst_partitioner_factory: None
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640a3335b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5640a255b350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:        Options.write_buffer_size: 16777216
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:  Options.max_write_buffer_number: 64
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:          Options.compression: LZ4
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:       Options.prefix_extractor: nullptr
Dec 08 09:46:30 compute-1 sudo[77812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:             Options.num_levels: 7
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                  Options.compression_opts.level: 32767
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:               Options.compression_opts.strategy: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                  Options.compression_opts.enabled: false
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                        Options.arena_block_size: 1048576
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                Options.disable_auto_compactions: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                   Options.inplace_update_support: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                           Options.bloom_locality: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                    Options.max_successive_merges: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                Options.paranoid_file_checks: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                Options.force_consistency_checks: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                Options.report_bg_io_stats: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                               Options.ttl: 2592000
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                       Options.enable_blob_files: false
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                           Options.min_blob_size: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                          Options.blob_file_size: 268435456
Dec 08 09:46:30 compute-1 sudo[77812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                Options.blob_file_starting_level: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:           Options.merge_operator: None
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:        Options.compaction_filter: None
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:        Options.compaction_filter_factory: None
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:  Options.sst_partitioner_factory: None
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640a3335b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5640a255b350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:        Options.write_buffer_size: 16777216
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:  Options.max_write_buffer_number: 64
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:          Options.compression: LZ4
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:       Options.prefix_extractor: nullptr
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:             Options.num_levels: 7
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                  Options.compression_opts.level: 32767
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:               Options.compression_opts.strategy: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                  Options.compression_opts.enabled: false
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                        Options.arena_block_size: 1048576
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                Options.disable_auto_compactions: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                   Options.inplace_update_support: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 08 09:46:30 compute-1 sudo[77812]: pam_unix(sudo:session): session closed for user root
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                           Options.bloom_locality: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                    Options.max_successive_merges: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                Options.paranoid_file_checks: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                Options.force_consistency_checks: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                Options.report_bg_io_stats: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                               Options.ttl: 2592000
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                       Options.enable_blob_files: false
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                           Options.min_blob_size: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                          Options.blob_file_size: 268435456
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                Options.blob_file_starting_level: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:           Options.merge_operator: None
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:        Options.compaction_filter: None
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:        Options.compaction_filter_factory: None
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:  Options.sst_partitioner_factory: None
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640a3335b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5640a255b350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:        Options.write_buffer_size: 16777216
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:  Options.max_write_buffer_number: 64
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:          Options.compression: LZ4
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:       Options.prefix_extractor: nullptr
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:             Options.num_levels: 7
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                  Options.compression_opts.level: 32767
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:               Options.compression_opts.strategy: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                  Options.compression_opts.enabled: false
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                        Options.arena_block_size: 1048576
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                Options.disable_auto_compactions: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                   Options.inplace_update_support: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                           Options.bloom_locality: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                    Options.max_successive_merges: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                Options.paranoid_file_checks: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                Options.force_consistency_checks: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                Options.report_bg_io_stats: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                               Options.ttl: 2592000
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                       Options.enable_blob_files: false
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                           Options.min_blob_size: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                          Options.blob_file_size: 268435456
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                Options.blob_file_starting_level: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:           Options.merge_operator: None
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:        Options.compaction_filter: None
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:        Options.compaction_filter_factory: None
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:  Options.sst_partitioner_factory: None
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640a3335b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5640a255b350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:        Options.write_buffer_size: 16777216
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:  Options.max_write_buffer_number: 64
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:          Options.compression: LZ4
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:       Options.prefix_extractor: nullptr
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:             Options.num_levels: 7
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                  Options.compression_opts.level: 32767
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:               Options.compression_opts.strategy: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                  Options.compression_opts.enabled: false
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                        Options.arena_block_size: 1048576
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                Options.disable_auto_compactions: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                   Options.inplace_update_support: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                           Options.bloom_locality: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                    Options.max_successive_merges: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                Options.paranoid_file_checks: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                Options.force_consistency_checks: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                Options.report_bg_io_stats: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                               Options.ttl: 2592000
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                       Options.enable_blob_files: false
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                           Options.min_blob_size: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                          Options.blob_file_size: 268435456
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                Options.blob_file_starting_level: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:           Options.merge_operator: None
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:        Options.compaction_filter: None
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:        Options.compaction_filter_factory: None
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:  Options.sst_partitioner_factory: None
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640a3335b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5640a255b350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:        Options.write_buffer_size: 16777216
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:  Options.max_write_buffer_number: 64
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:          Options.compression: LZ4
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:       Options.prefix_extractor: nullptr
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:             Options.num_levels: 7
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                  Options.compression_opts.level: 32767
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:               Options.compression_opts.strategy: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                  Options.compression_opts.enabled: false
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                        Options.arena_block_size: 1048576
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                Options.disable_auto_compactions: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                   Options.inplace_update_support: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                           Options.bloom_locality: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                    Options.max_successive_merges: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                Options.paranoid_file_checks: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                Options.force_consistency_checks: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                Options.report_bg_io_stats: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                               Options.ttl: 2592000
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                       Options.enable_blob_files: false
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                           Options.min_blob_size: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                          Options.blob_file_size: 268435456
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                Options.blob_file_starting_level: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:           Options.merge_operator: None
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:        Options.compaction_filter: None
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:        Options.compaction_filter_factory: None
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:  Options.sst_partitioner_factory: None
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640a3335b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5640a255b350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:        Options.write_buffer_size: 16777216
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:  Options.max_write_buffer_number: 64
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:          Options.compression: LZ4
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:       Options.prefix_extractor: nullptr
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:             Options.num_levels: 7
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                  Options.compression_opts.level: 32767
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:               Options.compression_opts.strategy: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                  Options.compression_opts.enabled: false
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                        Options.arena_block_size: 1048576
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                Options.disable_auto_compactions: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                   Options.inplace_update_support: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                           Options.bloom_locality: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                    Options.max_successive_merges: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                Options.paranoid_file_checks: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                Options.force_consistency_checks: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                Options.report_bg_io_stats: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                               Options.ttl: 2592000
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                       Options.enable_blob_files: false
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                           Options.min_blob_size: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                          Options.blob_file_size: 268435456
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                Options.blob_file_starting_level: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:           Options.merge_operator: None
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:        Options.compaction_filter: None
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:        Options.compaction_filter_factory: None
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:  Options.sst_partitioner_factory: None
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640a3335b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5640a255b350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:        Options.write_buffer_size: 16777216
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:  Options.max_write_buffer_number: 64
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:          Options.compression: LZ4
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:       Options.prefix_extractor: nullptr
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:             Options.num_levels: 7
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                  Options.compression_opts.level: 32767
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:               Options.compression_opts.strategy: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                  Options.compression_opts.enabled: false
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                        Options.arena_block_size: 1048576
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                Options.disable_auto_compactions: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                   Options.inplace_update_support: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                           Options.bloom_locality: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                    Options.max_successive_merges: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                Options.paranoid_file_checks: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                Options.force_consistency_checks: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                Options.report_bg_io_stats: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                               Options.ttl: 2592000
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                       Options.enable_blob_files: false
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                           Options.min_blob_size: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                          Options.blob_file_size: 268435456
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                Options.blob_file_starting_level: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:           Options.merge_operator: None
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:        Options.compaction_filter: None
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:        Options.compaction_filter_factory: None
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:  Options.sst_partitioner_factory: None
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640a3335b80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5640a255a9b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:        Options.write_buffer_size: 16777216
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:  Options.max_write_buffer_number: 64
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:          Options.compression: LZ4
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:       Options.prefix_extractor: nullptr
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:             Options.num_levels: 7
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                  Options.compression_opts.level: 32767
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:               Options.compression_opts.strategy: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                  Options.compression_opts.enabled: false
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                        Options.arena_block_size: 1048576
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                Options.disable_auto_compactions: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                   Options.inplace_update_support: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                           Options.bloom_locality: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                    Options.max_successive_merges: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                Options.paranoid_file_checks: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                Options.force_consistency_checks: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                Options.report_bg_io_stats: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                               Options.ttl: 2592000
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                       Options.enable_blob_files: false
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                           Options.min_blob_size: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                          Options.blob_file_size: 268435456
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                Options.blob_file_starting_level: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:           Options.merge_operator: None
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:        Options.compaction_filter: None
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:        Options.compaction_filter_factory: None
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:  Options.sst_partitioner_factory: None
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640a3335b80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5640a255a9b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:        Options.write_buffer_size: 16777216
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:  Options.max_write_buffer_number: 64
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:          Options.compression: LZ4
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:       Options.prefix_extractor: nullptr
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:             Options.num_levels: 7
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                  Options.compression_opts.level: 32767
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:               Options.compression_opts.strategy: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                  Options.compression_opts.enabled: false
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                        Options.arena_block_size: 1048576
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                Options.disable_auto_compactions: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                   Options.inplace_update_support: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                           Options.bloom_locality: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                    Options.max_successive_merges: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                Options.paranoid_file_checks: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                Options.force_consistency_checks: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                Options.report_bg_io_stats: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                               Options.ttl: 2592000
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                       Options.enable_blob_files: false
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                           Options.min_blob_size: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                          Options.blob_file_size: 268435456
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                Options.blob_file_starting_level: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:           Options.merge_operator: None
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:        Options.compaction_filter: None
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:        Options.compaction_filter_factory: None
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:  Options.sst_partitioner_factory: None
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640a3335b80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5640a255a9b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:        Options.write_buffer_size: 16777216
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:  Options.max_write_buffer_number: 64
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:          Options.compression: LZ4
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:       Options.prefix_extractor: nullptr
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:             Options.num_levels: 7
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                  Options.compression_opts.level: 32767
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:               Options.compression_opts.strategy: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                  Options.compression_opts.enabled: false
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                        Options.arena_block_size: 1048576
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                Options.disable_auto_compactions: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                   Options.inplace_update_support: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                           Options.bloom_locality: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                    Options.max_successive_merges: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                Options.paranoid_file_checks: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                Options.force_consistency_checks: 1
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                Options.report_bg_io_stats: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                               Options.ttl: 2592000
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                       Options.enable_blob_files: false
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                           Options.min_blob_size: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                          Options.blob_file_size: 268435456
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb:                Options.blob_file_starting_level: 0
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 7a445375-c36b-49e9-a503-da07f6129514
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765187190878003, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765187190878184, "job": 1, "event": "recovery_finished"}
Dec 08 09:46:30 compute-1 ceph-osd[77531]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
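The bluestore line above records the exact RocksDB option string the OSD applied when it opened its metadata database. As an illustrative aid only (the helper below is hypothetical and not part of Ceph or RocksDB), a short Python sketch parses such a comma-separated option string into a dictionary so it can be compared against the intended configuration:

    # parse_rocksdb_options is a hypothetical helper, not a Ceph or RocksDB API.
    # It splits an option string such as
    #   "compression=kLZ4Compression,max_write_buffer_number=64,..."
    # into a {key: value} dict for easier inspection.
    def parse_rocksdb_options(opts: str) -> dict:
        result = {}
        for item in opts.split(","):
            if not item:
                continue
            key, _, value = item.partition("=")
            result[key.strip()] = value.strip()
        return result

    if __name__ == "__main__":
        sample = ("compression=kLZ4Compression,max_write_buffer_number=64,"
                  "min_write_buffer_number_to_merge=6,write_buffer_size=16777216")
        for key, value in parse_rocksdb_options(sample).items():
            print(key, "=", value)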
Dec 08 09:46:30 compute-1 ceph-osd[77531]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Dec 08 09:46:30 compute-1 ceph-osd[77531]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Dec 08 09:46:30 compute-1 ceph-osd[77531]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Dec 08 09:46:30 compute-1 ceph-osd[77531]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Dec 08 09:46:30 compute-1 ceph-osd[77531]: freelist init
Dec 08 09:46:30 compute-1 ceph-osd[77531]: freelist _read_cfg
Dec 08 09:46:30 compute-1 ceph-osd[77531]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
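The _init_alloc line above reports the allocator state in hexadecimal. A minimal Python sketch (constants copied from that line; the conversion is plain arithmetic) shows that the reported capacity of 0x4ffc00000 bytes is the ~20 GiB quoted in the message, with three 4 KiB blocks already in use:

    # Hex sizes copied from the _init_alloc log line above.
    capacity = 0x4FFC00000   # device capacity in bytes
    free     = 0x4FFBFD000   # free bytes
    block    = 0x1000        # 4 KiB allocation unit

    GIB = 1 << 30
    print(f"capacity: {capacity} bytes = {capacity / GIB:.2f} GiB")
    print(f"free:     {free} bytes = {free / GIB:.2f} GiB")
    print(f"used:     {capacity - free} bytes = {(capacity - free) // block} blocks")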
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec 08 09:46:30 compute-1 ceph-osd[77531]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec 08 09:46:30 compute-1 ceph-osd[77531]: bluefs umount
Dec 08 09:46:30 compute-1 ceph-osd[77531]: bdev(0x5640a3361000 /var/lib/ceph/osd/ceph-0/block) close
Dec 08 09:46:31 compute-1 sudo[78028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 08 09:46:31 compute-1 sudo[78028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:46:31 compute-1 sudo[78028]: pam_unix(sudo:session): session closed for user root
Dec 08 09:46:31 compute-1 sudo[78053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Dec 08 09:46:31 compute-1 sudo[78053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:46:31 compute-1 ceph-osd[77531]: bdev(0x5640a3361000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 08 09:46:31 compute-1 ceph-osd[77531]: bdev(0x5640a3361000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 08 09:46:31 compute-1 ceph-osd[77531]: bdev(0x5640a3361000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 08 09:46:31 compute-1 ceph-osd[77531]: bdev(0x5640a3361000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 08 09:46:31 compute-1 ceph-osd[77531]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Dec 08 09:46:31 compute-1 ceph-osd[77531]: bluefs mount
Dec 08 09:46:31 compute-1 ceph-osd[77531]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 08 09:46:31 compute-1 ceph-osd[77531]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 08 09:46:31 compute-1 ceph-osd[77531]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 08 09:46:31 compute-1 ceph-osd[77531]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 08 09:46:31 compute-1 ceph-osd[77531]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 08 09:46:31 compute-1 ceph-osd[77531]: bluefs mount shared_bdev_used = 4718592
Dec 08 09:46:31 compute-1 ceph-osd[77531]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
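The db_paths budget logged above (20397110067 bytes for both db and db.slow) works out to roughly 95% of the 0x4ffc00000-byte block device opened earlier; treating that ratio as an intentional sizing rule is an assumption, and the sketch below only performs the division:

    # Values copied from the surrounding log lines; the ~95% reading is an
    # observation, not a documented Ceph default.
    device_bytes   = 0x4FFC00000      # 21470642176, from the bdev open line
    db_paths_bytes = 20397110067      # from the _prepare_db_environment line

    ratio = db_paths_bytes / device_bytes
    print(f"db_paths budget is {ratio:.4f} of the block device "
          f"({db_paths_bytes} / {device_bytes} bytes)")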
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: RocksDB version: 7.9.2
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Git sha 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Compile date 2025-07-17 03:12:14
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: DB SUMMARY
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: DB Session ID:  AZ48YSQ82983S74LVWV5
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: CURRENT file:  CURRENT
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: IDENTITY file:  IDENTITY
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                         Options.error_if_exists: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                       Options.create_if_missing: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                         Options.paranoid_checks: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                                     Options.env: 0x5640a34e22a0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                                Options.info_log: 0x5640a37947c0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                Options.max_file_opening_threads: 16
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                              Options.statistics: (nil)
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                               Options.use_fsync: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                       Options.max_log_file_size: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                         Options.allow_fallocate: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                        Options.use_direct_reads: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:          Options.create_missing_column_families: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                              Options.db_log_dir: 
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                                 Options.wal_dir: db.wal
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                   Options.advise_random_on_open: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                    Options.write_buffer_manager: 0x5640a343ec80
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                            Options.rate_limiter: (nil)
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                  Options.unordered_write: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                               Options.row_cache: None
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                              Options.wal_filter: None
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:             Options.allow_ingest_behind: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:             Options.two_write_queues: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:             Options.manual_wal_flush: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:             Options.wal_compression: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:             Options.atomic_flush: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                 Options.log_readahead_size: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:             Options.allow_data_in_errors: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:             Options.db_host_id: __hostname__
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:             Options.max_background_jobs: 4
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:             Options.max_background_compactions: -1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:             Options.max_subcompactions: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                          Options.max_open_files: -1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                          Options.bytes_per_sync: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                  Options.max_background_flushes: -1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Compression algorithms supported:
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         kZSTD supported: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         kXpressCompression supported: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         kBZip2Compression supported: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         kLZ4Compression supported: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         kZlibCompression supported: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         kLZ4HCCompression supported: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         kSnappyCompression supported: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:        Options.compaction_filter: None
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:        Options.compaction_filter_factory: None
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:  Options.sst_partitioner_factory: None
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640a3335680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5640a255b350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:        Options.write_buffer_size: 16777216
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:  Options.max_write_buffer_number: 64
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:          Options.compression: LZ4
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:       Options.prefix_extractor: nullptr
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:             Options.num_levels: 7
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                  Options.compression_opts.level: 32767
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:               Options.compression_opts.strategy: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                  Options.compression_opts.enabled: false
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                        Options.arena_block_size: 1048576
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                Options.disable_auto_compactions: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                   Options.inplace_update_support: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                           Options.bloom_locality: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                    Options.max_successive_merges: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                Options.paranoid_file_checks: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                Options.force_consistency_checks: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                Options.report_bg_io_stats: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                               Options.ttl: 2592000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                       Options.enable_blob_files: false
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                           Options.min_blob_size: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                          Options.blob_file_size: 268435456
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                Options.blob_file_starting_level: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:           Options.merge_operator: None
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:        Options.compaction_filter: None
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:        Options.compaction_filter_factory: None
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:  Options.sst_partitioner_factory: None
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640a3335680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5640a255b350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:        Options.write_buffer_size: 16777216
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:  Options.max_write_buffer_number: 64
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:          Options.compression: LZ4
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:       Options.prefix_extractor: nullptr
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:             Options.num_levels: 7
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                  Options.compression_opts.level: 32767
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:               Options.compression_opts.strategy: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                  Options.compression_opts.enabled: false
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                        Options.arena_block_size: 1048576
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                Options.disable_auto_compactions: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                   Options.inplace_update_support: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                           Options.bloom_locality: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                    Options.max_successive_merges: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                Options.paranoid_file_checks: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                Options.force_consistency_checks: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                Options.report_bg_io_stats: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                               Options.ttl: 2592000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                       Options.enable_blob_files: false
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                           Options.min_blob_size: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                          Options.blob_file_size: 268435456
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                Options.blob_file_starting_level: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:           Options.merge_operator: None
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:        Options.compaction_filter: None
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:        Options.compaction_filter_factory: None
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:  Options.sst_partitioner_factory: None
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640a3335680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5640a255b350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:        Options.write_buffer_size: 16777216
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:  Options.max_write_buffer_number: 64
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:          Options.compression: LZ4
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:       Options.prefix_extractor: nullptr
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:             Options.num_levels: 7
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                  Options.compression_opts.level: 32767
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:               Options.compression_opts.strategy: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                  Options.compression_opts.enabled: false
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                        Options.arena_block_size: 1048576
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                Options.disable_auto_compactions: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                   Options.inplace_update_support: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                           Options.bloom_locality: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                    Options.max_successive_merges: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                Options.paranoid_file_checks: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                Options.force_consistency_checks: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                Options.report_bg_io_stats: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                               Options.ttl: 2592000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                       Options.enable_blob_files: false
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                           Options.min_blob_size: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                          Options.blob_file_size: 268435456
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                Options.blob_file_starting_level: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:           Options.merge_operator: None
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:        Options.compaction_filter: None
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:        Options.compaction_filter_factory: None
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:  Options.sst_partitioner_factory: None
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640a3335680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5640a255b350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:        Options.write_buffer_size: 16777216
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:  Options.max_write_buffer_number: 64
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:          Options.compression: LZ4
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:       Options.prefix_extractor: nullptr
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:             Options.num_levels: 7
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                  Options.compression_opts.level: 32767
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:               Options.compression_opts.strategy: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                  Options.compression_opts.enabled: false
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                        Options.arena_block_size: 1048576
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                Options.disable_auto_compactions: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                   Options.inplace_update_support: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                           Options.bloom_locality: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                    Options.max_successive_merges: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                Options.paranoid_file_checks: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                Options.force_consistency_checks: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                Options.report_bg_io_stats: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                               Options.ttl: 2592000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                       Options.enable_blob_files: false
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                           Options.min_blob_size: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                          Options.blob_file_size: 268435456
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                Options.blob_file_starting_level: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:           Options.merge_operator: None
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:        Options.compaction_filter: None
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:        Options.compaction_filter_factory: None
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:  Options.sst_partitioner_factory: None
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640a3335680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5640a255b350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:        Options.write_buffer_size: 16777216
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:  Options.max_write_buffer_number: 64
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:          Options.compression: LZ4
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:       Options.prefix_extractor: nullptr
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:             Options.num_levels: 7
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                  Options.compression_opts.level: 32767
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:               Options.compression_opts.strategy: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                  Options.compression_opts.enabled: false
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                        Options.arena_block_size: 1048576
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                Options.disable_auto_compactions: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                   Options.inplace_update_support: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                           Options.bloom_locality: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                    Options.max_successive_merges: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                Options.paranoid_file_checks: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                Options.force_consistency_checks: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                Options.report_bg_io_stats: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                               Options.ttl: 2592000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                       Options.enable_blob_files: false
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                           Options.min_blob_size: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                          Options.blob_file_size: 268435456
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                Options.blob_file_starting_level: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:           Options.merge_operator: None
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:        Options.compaction_filter: None
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:        Options.compaction_filter_factory: None
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:  Options.sst_partitioner_factory: None
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640a3335680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5640a255b350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:        Options.write_buffer_size: 16777216
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:  Options.max_write_buffer_number: 64
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:          Options.compression: LZ4
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:       Options.prefix_extractor: nullptr
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:             Options.num_levels: 7
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                  Options.compression_opts.level: 32767
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:               Options.compression_opts.strategy: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                  Options.compression_opts.enabled: false
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                        Options.arena_block_size: 1048576
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                Options.disable_auto_compactions: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                   Options.inplace_update_support: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                           Options.bloom_locality: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                    Options.max_successive_merges: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                Options.paranoid_file_checks: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                Options.force_consistency_checks: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                Options.report_bg_io_stats: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                               Options.ttl: 2592000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                       Options.enable_blob_files: false
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                           Options.min_blob_size: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                          Options.blob_file_size: 268435456
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                Options.blob_file_starting_level: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:           Options.merge_operator: None
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:        Options.compaction_filter: None
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:        Options.compaction_filter_factory: None
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:  Options.sst_partitioner_factory: None
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640a3335680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5640a255b350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:        Options.write_buffer_size: 16777216
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:  Options.max_write_buffer_number: 64
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:          Options.compression: LZ4
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:       Options.prefix_extractor: nullptr
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:             Options.num_levels: 7
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                  Options.compression_opts.level: 32767
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:               Options.compression_opts.strategy: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                  Options.compression_opts.enabled: false
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                        Options.arena_block_size: 1048576
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                Options.disable_auto_compactions: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                   Options.inplace_update_support: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                           Options.bloom_locality: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                    Options.max_successive_merges: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                Options.paranoid_file_checks: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                Options.force_consistency_checks: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                Options.report_bg_io_stats: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                               Options.ttl: 2592000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                       Options.enable_blob_files: false
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                           Options.min_blob_size: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                          Options.blob_file_size: 268435456
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                Options.blob_file_starting_level: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:           Options.merge_operator: None
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:        Options.compaction_filter: None
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:        Options.compaction_filter_factory: None
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:  Options.sst_partitioner_factory: None
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640a3335ac0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5640a255a9b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:        Options.write_buffer_size: 16777216
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:  Options.max_write_buffer_number: 64
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:          Options.compression: LZ4
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:       Options.prefix_extractor: nullptr
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:             Options.num_levels: 7
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                  Options.compression_opts.level: 32767
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:               Options.compression_opts.strategy: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                  Options.compression_opts.enabled: false
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                        Options.arena_block_size: 1048576
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                Options.disable_auto_compactions: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                   Options.inplace_update_support: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                           Options.bloom_locality: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                    Options.max_successive_merges: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                Options.paranoid_file_checks: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                Options.force_consistency_checks: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                Options.report_bg_io_stats: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                               Options.ttl: 2592000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                       Options.enable_blob_files: false
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                           Options.min_blob_size: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                          Options.blob_file_size: 268435456
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                Options.blob_file_starting_level: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:           Options.merge_operator: None
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:        Options.compaction_filter: None
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:        Options.compaction_filter_factory: None
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:  Options.sst_partitioner_factory: None
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640a3335ac0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5640a255a9b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:        Options.write_buffer_size: 16777216
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:  Options.max_write_buffer_number: 64
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:          Options.compression: LZ4
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:       Options.prefix_extractor: nullptr
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:             Options.num_levels: 7
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                  Options.compression_opts.level: 32767
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:               Options.compression_opts.strategy: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                  Options.compression_opts.enabled: false
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                        Options.arena_block_size: 1048576
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                Options.disable_auto_compactions: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                   Options.inplace_update_support: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                           Options.bloom_locality: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                    Options.max_successive_merges: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                Options.paranoid_file_checks: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                Options.force_consistency_checks: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                Options.report_bg_io_stats: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                               Options.ttl: 2592000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                       Options.enable_blob_files: false
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                           Options.min_blob_size: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                          Options.blob_file_size: 268435456
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                Options.blob_file_starting_level: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:           Options.merge_operator: None
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:        Options.compaction_filter: None
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:        Options.compaction_filter_factory: None
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:  Options.sst_partitioner_factory: None
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5640a3335ac0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5640a255a9b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:        Options.write_buffer_size: 16777216
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:  Options.max_write_buffer_number: 64
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:          Options.compression: LZ4
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:       Options.prefix_extractor: nullptr
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:             Options.num_levels: 7
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                  Options.compression_opts.level: 32767
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:               Options.compression_opts.strategy: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                  Options.compression_opts.enabled: false
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                        Options.arena_block_size: 1048576
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                Options.disable_auto_compactions: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                   Options.inplace_update_support: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                           Options.bloom_locality: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                    Options.max_successive_merges: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                Options.paranoid_file_checks: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                Options.force_consistency_checks: 1
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                Options.report_bg_io_stats: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                               Options.ttl: 2592000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                       Options.enable_blob_files: false
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                           Options.min_blob_size: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                          Options.blob_file_size: 268435456
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb:                Options.blob_file_starting_level: 0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 7a445375-c36b-49e9-a503-da07f6129514
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765187191145629, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765187191150149, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765187191, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7a445375-c36b-49e9-a503-da07f6129514", "db_session_id": "AZ48YSQ82983S74LVWV5", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765187191153094, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765187191, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7a445375-c36b-49e9-a503-da07f6129514", "db_session_id": "AZ48YSQ82983S74LVWV5", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765187191155413, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765187191, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7a445375-c36b-49e9-a503-da07f6129514", "db_session_id": "AZ48YSQ82983S74LVWV5", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765187191156655, "job": 1, "event": "recovery_finished"}
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5640a3520000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: DB pointer 0x5640a34ee000
Dec 08 09:46:31 compute-1 ceph-osd[77531]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec 08 09:46:31 compute-1 ceph-osd[77531]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Dec 08 09:46:31 compute-1 ceph-osd[77531]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 08 09:46:31 compute-1 ceph-osd[77531]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5640a255b350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5640a255b350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5640a255b350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5640a255b350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5640a255b350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5640a255b350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5640a255b350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5640a255a9b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5640a255a9b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5640a255a9b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5640a255b350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5640a255b350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 08 09:46:31 compute-1 ceph-osd[77531]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Dec 08 09:46:31 compute-1 ceph-osd[77531]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/hello/cls_hello.cc:316: loading cls_hello
Dec 08 09:46:31 compute-1 ceph-osd[77531]: _get_class not permitted to load lua
Dec 08 09:46:31 compute-1 ceph-osd[77531]: _get_class not permitted to load sdk
Dec 08 09:46:31 compute-1 ceph-osd[77531]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Dec 08 09:46:31 compute-1 ceph-osd[77531]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Dec 08 09:46:31 compute-1 ceph-osd[77531]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Dec 08 09:46:31 compute-1 ceph-osd[77531]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Dec 08 09:46:31 compute-1 ceph-osd[77531]: osd.0 0 load_pgs
Dec 08 09:46:31 compute-1 ceph-osd[77531]: osd.0 0 load_pgs opened 0 pgs
Dec 08 09:46:31 compute-1 ceph-osd[77531]: osd.0 0 log_to_monitors true
Dec 08 09:46:31 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-osd-0[77527]: 2025-12-08T09:46:31.187+0000 7f9429ddb740 -1 osd.0 0 log_to_monitors true
Dec 08 09:46:31 compute-1 podman[78365]: 2025-12-08 09:46:31.587269107 +0000 UTC m=+0.065050308 container exec 0b1ceffabe238cda53122741c28a5b5e679f0efe3fbc9966719b6787d3af3102 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-crash-compute-1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec 08 09:46:31 compute-1 podman[78365]: 2025-12-08 09:46:31.707245423 +0000 UTC m=+0.185026584 container exec_died 0b1ceffabe238cda53122741c28a5b5e679f0efe3fbc9966719b6787d3af3102 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-crash-compute-1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec 08 09:46:31 compute-1 sudo[78053]: pam_unix(sudo:session): session closed for user root
Dec 08 09:46:31 compute-1 sudo[78417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 08 09:46:31 compute-1 sudo[78417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:46:31 compute-1 sudo[78417]: pam_unix(sudo:session): session closed for user root
Dec 08 09:46:31 compute-1 sudo[78442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4 -- inventory --format=json-pretty --filter-for-batch
Dec 08 09:46:31 compute-1 sudo[78442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:46:32 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Dec 08 09:46:32 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Dec 08 09:46:32 compute-1 podman[78508]: 2025-12-08 09:46:32.34367645 +0000 UTC m=+0.041458268 container create acbabde50cc764634edf6b97f9efd6b189400e9e3c2db1cc51909db2950321f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_roentgen, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec 08 09:46:32 compute-1 systemd[1]: Started libpod-conmon-acbabde50cc764634edf6b97f9efd6b189400e9e3c2db1cc51909db2950321f9.scope.
Dec 08 09:46:32 compute-1 podman[78508]: 2025-12-08 09:46:32.324169666 +0000 UTC m=+0.021951534 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 08 09:46:32 compute-1 systemd[1]: Started libcrun container.
Dec 08 09:46:32 compute-1 podman[78508]: 2025-12-08 09:46:32.438459111 +0000 UTC m=+0.136240949 container init acbabde50cc764634edf6b97f9efd6b189400e9e3c2db1cc51909db2950321f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_roentgen, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid)
Dec 08 09:46:32 compute-1 podman[78508]: 2025-12-08 09:46:32.450825272 +0000 UTC m=+0.148607080 container start acbabde50cc764634edf6b97f9efd6b189400e9e3c2db1cc51909db2950321f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_roentgen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 08 09:46:32 compute-1 podman[78508]: 2025-12-08 09:46:32.454614869 +0000 UTC m=+0.152396777 container attach acbabde50cc764634edf6b97f9efd6b189400e9e3c2db1cc51909db2950321f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_roentgen, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 08 09:46:32 compute-1 amazing_roentgen[78525]: 167 167
Dec 08 09:46:32 compute-1 systemd[1]: libpod-acbabde50cc764634edf6b97f9efd6b189400e9e3c2db1cc51909db2950321f9.scope: Deactivated successfully.
Dec 08 09:46:32 compute-1 conmon[78525]: conmon acbabde50cc764634edf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-acbabde50cc764634edf6b97f9efd6b189400e9e3c2db1cc51909db2950321f9.scope/container/memory.events
Dec 08 09:46:32 compute-1 podman[78508]: 2025-12-08 09:46:32.459250891 +0000 UTC m=+0.157032699 container died acbabde50cc764634edf6b97f9efd6b189400e9e3c2db1cc51909db2950321f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_roentgen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 08 09:46:32 compute-1 systemd[1]: var-lib-containers-storage-overlay-fcab2373b1c0e2fe25fc1b2f052508def785fbc9745b50a103c447a82947b59e-merged.mount: Deactivated successfully.
Dec 08 09:46:32 compute-1 podman[78508]: 2025-12-08 09:46:32.50252929 +0000 UTC m=+0.200311098 container remove acbabde50cc764634edf6b97f9efd6b189400e9e3c2db1cc51909db2950321f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_roentgen, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 08 09:46:32 compute-1 systemd[1]: libpod-conmon-acbabde50cc764634edf6b97f9efd6b189400e9e3c2db1cc51909db2950321f9.scope: Deactivated successfully.
Dec 08 09:46:32 compute-1 podman[78547]: 2025-12-08 09:46:32.707171239 +0000 UTC m=+0.048670143 container create 59a0921a1fbc6545dc47c738a1cd38f8d613125389efa4866a4ff0d69e1a6b6c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_shtern, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Dec 08 09:46:32 compute-1 systemd[1]: Started libpod-conmon-59a0921a1fbc6545dc47c738a1cd38f8d613125389efa4866a4ff0d69e1a6b6c.scope.
Dec 08 09:46:32 compute-1 podman[78547]: 2025-12-08 09:46:32.680103271 +0000 UTC m=+0.021602185 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 08 09:46:32 compute-1 systemd[1]: Started libcrun container.
Dec 08 09:46:32 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be6019a1c26642faedb283349c2b8d670515d37d1a6b73d754fd9117400dd76c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 08 09:46:32 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be6019a1c26642faedb283349c2b8d670515d37d1a6b73d754fd9117400dd76c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 08 09:46:32 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be6019a1c26642faedb283349c2b8d670515d37d1a6b73d754fd9117400dd76c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 08 09:46:32 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be6019a1c26642faedb283349c2b8d670515d37d1a6b73d754fd9117400dd76c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 08 09:46:32 compute-1 podman[78547]: 2025-12-08 09:46:32.838344873 +0000 UTC m=+0.179843807 container init 59a0921a1fbc6545dc47c738a1cd38f8d613125389efa4866a4ff0d69e1a6b6c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_shtern, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 08 09:46:32 compute-1 podman[78547]: 2025-12-08 09:46:32.852796463 +0000 UTC m=+0.194295367 container start 59a0921a1fbc6545dc47c738a1cd38f8d613125389efa4866a4ff0d69e1a6b6c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_shtern, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 08 09:46:32 compute-1 podman[78547]: 2025-12-08 09:46:32.856595081 +0000 UTC m=+0.198093995 container attach 59a0921a1fbc6545dc47c738a1cd38f8d613125389efa4866a4ff0d69e1a6b6c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_shtern, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 08 09:46:33 compute-1 ceph-osd[77531]: osd.0 0 done with init, starting boot process
Dec 08 09:46:33 compute-1 ceph-osd[77531]: osd.0 0 start_boot
Dec 08 09:46:33 compute-1 ceph-osd[77531]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Dec 08 09:46:33 compute-1 ceph-osd[77531]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Dec 08 09:46:33 compute-1 ceph-osd[77531]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Dec 08 09:46:33 compute-1 ceph-osd[77531]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Dec 08 09:46:33 compute-1 ceph-osd[77531]: osd.0 0  bench count 12288000 bsize 4 KiB
Dec 08 09:46:33 compute-1 exciting_shtern[78565]: [
Dec 08 09:46:33 compute-1 exciting_shtern[78565]:     {
Dec 08 09:46:33 compute-1 exciting_shtern[78565]:         "available": false,
Dec 08 09:46:33 compute-1 exciting_shtern[78565]:         "being_replaced": false,
Dec 08 09:46:33 compute-1 exciting_shtern[78565]:         "ceph_device_lvm": false,
Dec 08 09:46:33 compute-1 exciting_shtern[78565]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 08 09:46:33 compute-1 exciting_shtern[78565]:         "lsm_data": {},
Dec 08 09:46:33 compute-1 exciting_shtern[78565]:         "lvs": [],
Dec 08 09:46:33 compute-1 exciting_shtern[78565]:         "path": "/dev/sr0",
Dec 08 09:46:33 compute-1 exciting_shtern[78565]:         "rejected_reasons": [
Dec 08 09:46:33 compute-1 exciting_shtern[78565]:             "Insufficient space (<5GB)",
Dec 08 09:46:33 compute-1 exciting_shtern[78565]:             "Has a FileSystem"
Dec 08 09:46:33 compute-1 exciting_shtern[78565]:         ],
Dec 08 09:46:33 compute-1 exciting_shtern[78565]:         "sys_api": {
Dec 08 09:46:33 compute-1 exciting_shtern[78565]:             "actuators": null,
Dec 08 09:46:33 compute-1 exciting_shtern[78565]:             "device_nodes": [
Dec 08 09:46:33 compute-1 exciting_shtern[78565]:                 "sr0"
Dec 08 09:46:33 compute-1 exciting_shtern[78565]:             ],
Dec 08 09:46:33 compute-1 exciting_shtern[78565]:             "devname": "sr0",
Dec 08 09:46:33 compute-1 exciting_shtern[78565]:             "human_readable_size": "482.00 KB",
Dec 08 09:46:33 compute-1 exciting_shtern[78565]:             "id_bus": "ata",
Dec 08 09:46:33 compute-1 exciting_shtern[78565]:             "model": "QEMU DVD-ROM",
Dec 08 09:46:33 compute-1 exciting_shtern[78565]:             "nr_requests": "2",
Dec 08 09:46:33 compute-1 exciting_shtern[78565]:             "parent": "/dev/sr0",
Dec 08 09:46:33 compute-1 exciting_shtern[78565]:             "partitions": {},
Dec 08 09:46:33 compute-1 exciting_shtern[78565]:             "path": "/dev/sr0",
Dec 08 09:46:33 compute-1 exciting_shtern[78565]:             "removable": "1",
Dec 08 09:46:33 compute-1 exciting_shtern[78565]:             "rev": "2.5+",
Dec 08 09:46:33 compute-1 exciting_shtern[78565]:             "ro": "0",
Dec 08 09:46:33 compute-1 exciting_shtern[78565]:             "rotational": "1",
Dec 08 09:46:33 compute-1 exciting_shtern[78565]:             "sas_address": "",
Dec 08 09:46:33 compute-1 exciting_shtern[78565]:             "sas_device_handle": "",
Dec 08 09:46:33 compute-1 exciting_shtern[78565]:             "scheduler_mode": "mq-deadline",
Dec 08 09:46:33 compute-1 exciting_shtern[78565]:             "sectors": 0,
Dec 08 09:46:33 compute-1 exciting_shtern[78565]:             "sectorsize": "2048",
Dec 08 09:46:33 compute-1 exciting_shtern[78565]:             "size": 493568.0,
Dec 08 09:46:33 compute-1 exciting_shtern[78565]:             "support_discard": "2048",
Dec 08 09:46:33 compute-1 exciting_shtern[78565]:             "type": "disk",
Dec 08 09:46:33 compute-1 exciting_shtern[78565]:             "vendor": "QEMU"
Dec 08 09:46:33 compute-1 exciting_shtern[78565]:         }
Dec 08 09:46:33 compute-1 exciting_shtern[78565]:     }
Dec 08 09:46:33 compute-1 exciting_shtern[78565]: ]
Dec 08 09:46:33 compute-1 systemd[1]: libpod-59a0921a1fbc6545dc47c738a1cd38f8d613125389efa4866a4ff0d69e1a6b6c.scope: Deactivated successfully.
Dec 08 09:46:33 compute-1 podman[78547]: 2025-12-08 09:46:33.510957726 +0000 UTC m=+0.852456630 container died 59a0921a1fbc6545dc47c738a1cd38f8d613125389efa4866a4ff0d69e1a6b6c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_shtern, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 08 09:46:33 compute-1 systemd[1]: var-lib-containers-storage-overlay-be6019a1c26642faedb283349c2b8d670515d37d1a6b73d754fd9117400dd76c-merged.mount: Deactivated successfully.
Dec 08 09:46:33 compute-1 podman[78547]: 2025-12-08 09:46:33.616604745 +0000 UTC m=+0.958103669 container remove 59a0921a1fbc6545dc47c738a1cd38f8d613125389efa4866a4ff0d69e1a6b6c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_shtern, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 08 09:46:33 compute-1 systemd[1]: libpod-conmon-59a0921a1fbc6545dc47c738a1cd38f8d613125389efa4866a4ff0d69e1a6b6c.scope: Deactivated successfully.
Dec 08 09:46:33 compute-1 sudo[78442]: pam_unix(sudo:session): session closed for user root
Dec 08 09:46:36 compute-1 ceph-osd[77531]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 30.097 iops: 7704.842 elapsed_sec: 0.389
Dec 08 09:46:36 compute-1 ceph-osd[77531]: log_channel(cluster) log [WRN] : OSD bench result of 7704.842246 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 08 09:46:36 compute-1 ceph-osd[77531]: osd.0 0 waiting for initial osdmap
Dec 08 09:46:36 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-osd-0[77527]: 2025-12-08T09:46:36.641+0000 7f9425d5e640 -1 osd.0 0 waiting for initial osdmap
Dec 08 09:46:36 compute-1 ceph-osd[77531]: osd.0 7 crush map has features 288514050185494528, adjusting msgr requires for clients
Dec 08 09:46:36 compute-1 ceph-osd[77531]: osd.0 7 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Dec 08 09:46:36 compute-1 ceph-osd[77531]: osd.0 7 crush map has features 3314932999778484224, adjusting msgr requires for osds
Dec 08 09:46:36 compute-1 ceph-osd[77531]: osd.0 7 check_osdmap_features require_osd_release unknown -> squid
Dec 08 09:46:36 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-osd-0[77527]: 2025-12-08T09:46:36.660+0000 7f9421386640 -1 osd.0 7 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 08 09:46:36 compute-1 ceph-osd[77531]: osd.0 7 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 08 09:46:36 compute-1 ceph-osd[77531]: osd.0 7 set_numa_affinity not setting numa affinity
Dec 08 09:46:36 compute-1 ceph-osd[77531]: osd.0 7 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Dec 08 09:46:37 compute-1 ceph-osd[77531]: osd.0 8 state: booting -> active
Dec 08 09:46:38 compute-1 ceph-osd[77531]: osd.0 9 crush map has features 288514051259236352, adjusting msgr requires for clients
Dec 08 09:46:38 compute-1 ceph-osd[77531]: osd.0 9 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Dec 08 09:46:38 compute-1 ceph-osd[77531]: osd.0 9 crush map has features 3314933000852226048, adjusting msgr requires for osds
Dec 08 09:46:38 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 9 pg[1.0( empty local-lis/les=0/0 n=0 ec=9/9 lis/c=0/0 les/c/f=0/0/0 sis=9) [0] r=0 lpr=9 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:46:39 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 10 pg[1.0( empty local-lis/les=9/10 n=0 ec=9/9 lis/c=0/0 les/c/f=0/0/0 sis=9) [0] r=0 lpr=9 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:46:42 compute-1 sshd-session[79518]: Invalid user jacob from 79.32.212.213 port 51900
Dec 08 09:46:42 compute-1 sshd-session[79518]: Received disconnect from 79.32.212.213 port 51900:11: Bye Bye [preauth]
Dec 08 09:46:42 compute-1 sshd-session[79518]: Disconnected from invalid user jacob 79.32.212.213 port 51900 [preauth]
Dec 08 09:46:57 compute-1 sudo[79520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 08 09:46:57 compute-1 sudo[79520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:46:57 compute-1 sudo[79520]: pam_unix(sudo:session): session closed for user root
Dec 08 09:46:57 compute-1 sudo[79545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4
Dec 08 09:46:57 compute-1 sudo[79545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:46:57 compute-1 podman[79611]: 2025-12-08 09:46:57.938012832 +0000 UTC m=+0.044027811 container create e0256d7cce2118685fb5d9d2b472373d903bd5759208454c69bebd1fc5774cf7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec 08 09:46:57 compute-1 systemd[1]: Started libpod-conmon-e0256d7cce2118685fb5d9d2b472373d903bd5759208454c69bebd1fc5774cf7.scope.
Dec 08 09:46:58 compute-1 systemd[1]: Started libcrun container.
Dec 08 09:46:58 compute-1 podman[79611]: 2025-12-08 09:46:57.915818432 +0000 UTC m=+0.021833451 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 08 09:46:58 compute-1 podman[79611]: 2025-12-08 09:46:58.024796746 +0000 UTC m=+0.130811775 container init e0256d7cce2118685fb5d9d2b472373d903bd5759208454c69bebd1fc5774cf7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_blackwell, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 08 09:46:58 compute-1 podman[79611]: 2025-12-08 09:46:58.035822249 +0000 UTC m=+0.141837238 container start e0256d7cce2118685fb5d9d2b472373d903bd5759208454c69bebd1fc5774cf7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325)
Dec 08 09:46:58 compute-1 podman[79611]: 2025-12-08 09:46:58.039666138 +0000 UTC m=+0.145681157 container attach e0256d7cce2118685fb5d9d2b472373d903bd5759208454c69bebd1fc5774cf7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 08 09:46:58 compute-1 clever_blackwell[79628]: 167 167
Dec 08 09:46:58 compute-1 systemd[1]: libpod-e0256d7cce2118685fb5d9d2b472373d903bd5759208454c69bebd1fc5774cf7.scope: Deactivated successfully.
Dec 08 09:46:58 compute-1 conmon[79628]: conmon e0256d7cce2118685fb5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e0256d7cce2118685fb5d9d2b472373d903bd5759208454c69bebd1fc5774cf7.scope/container/memory.events
Dec 08 09:46:58 compute-1 podman[79611]: 2025-12-08 09:46:58.044105834 +0000 UTC m=+0.150120823 container died e0256d7cce2118685fb5d9d2b472373d903bd5759208454c69bebd1fc5774cf7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_blackwell, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 08 09:46:58 compute-1 systemd[1]: var-lib-containers-storage-overlay-b57fe4bda39260fb9b3a54abadbd5e0a5e0c7d31061dd72ba48e200f12e0ed0f-merged.mount: Deactivated successfully.
Dec 08 09:46:58 compute-1 podman[79611]: 2025-12-08 09:46:58.087769642 +0000 UTC m=+0.193784611 container remove e0256d7cce2118685fb5d9d2b472373d903bd5759208454c69bebd1fc5774cf7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid)
Dec 08 09:46:58 compute-1 systemd[1]: libpod-conmon-e0256d7cce2118685fb5d9d2b472373d903bd5759208454c69bebd1fc5774cf7.scope: Deactivated successfully.
Dec 08 09:46:58 compute-1 podman[79643]: 2025-12-08 09:46:58.162621387 +0000 UTC m=+0.051964496 container create 8a12ab76d105205d794efaa59687271b86968df2febaddf32289cac53f9fff7d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_einstein, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 08 09:46:58 compute-1 systemd[1]: Started libpod-conmon-8a12ab76d105205d794efaa59687271b86968df2febaddf32289cac53f9fff7d.scope.
Dec 08 09:46:58 compute-1 podman[79643]: 2025-12-08 09:46:58.133331536 +0000 UTC m=+0.022674685 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 08 09:46:58 compute-1 systemd[1]: Started libcrun container.
Dec 08 09:46:58 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cebbad93194cc7fc7c22b00de825fe618bfac043afb3497ffc6fc4eb85147f2d/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Dec 08 09:46:58 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cebbad93194cc7fc7c22b00de825fe618bfac043afb3497ffc6fc4eb85147f2d/merged/tmp/config supports timestamps until 2038 (0x7fffffff)
Dec 08 09:46:58 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cebbad93194cc7fc7c22b00de825fe618bfac043afb3497ffc6fc4eb85147f2d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 08 09:46:58 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cebbad93194cc7fc7c22b00de825fe618bfac043afb3497ffc6fc4eb85147f2d/merged/var/lib/ceph/mon/ceph-compute-1 supports timestamps until 2038 (0x7fffffff)
Dec 08 09:46:58 compute-1 podman[79643]: 2025-12-08 09:46:58.274693449 +0000 UTC m=+0.164036578 container init 8a12ab76d105205d794efaa59687271b86968df2febaddf32289cac53f9fff7d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 08 09:46:58 compute-1 podman[79643]: 2025-12-08 09:46:58.285694331 +0000 UTC m=+0.175037430 container start 8a12ab76d105205d794efaa59687271b86968df2febaddf32289cac53f9fff7d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 08 09:46:58 compute-1 podman[79643]: 2025-12-08 09:46:58.289146609 +0000 UTC m=+0.178489758 container attach 8a12ab76d105205d794efaa59687271b86968df2febaddf32289cac53f9fff7d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec 08 09:46:58 compute-1 systemd[1]: libpod-8a12ab76d105205d794efaa59687271b86968df2febaddf32289cac53f9fff7d.scope: Deactivated successfully.
Dec 08 09:46:58 compute-1 podman[79643]: 2025-12-08 09:46:58.39307625 +0000 UTC m=+0.282419429 container died 8a12ab76d105205d794efaa59687271b86968df2febaddf32289cac53f9fff7d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_einstein, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec 08 09:46:58 compute-1 systemd[1]: var-lib-containers-storage-overlay-cebbad93194cc7fc7c22b00de825fe618bfac043afb3497ffc6fc4eb85147f2d-merged.mount: Deactivated successfully.
Dec 08 09:46:58 compute-1 podman[79643]: 2025-12-08 09:46:58.440157566 +0000 UTC m=+0.329500705 container remove 8a12ab76d105205d794efaa59687271b86968df2febaddf32289cac53f9fff7d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_einstein, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 08 09:46:58 compute-1 systemd[1]: libpod-conmon-8a12ab76d105205d794efaa59687271b86968df2febaddf32289cac53f9fff7d.scope: Deactivated successfully.
Dec 08 09:46:58 compute-1 systemd[1]: Reloading.
Dec 08 09:46:58 compute-1 systemd-rc-local-generator[79727]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 08 09:46:58 compute-1 systemd-sysv-generator[79731]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 08 09:46:58 compute-1 systemd[1]: Reloading.
Dec 08 09:46:58 compute-1 systemd-rc-local-generator[79766]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 08 09:46:58 compute-1 systemd-sysv-generator[79771]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 08 09:46:59 compute-1 systemd[1]: Starting Ceph mon.compute-1 for ceb838ef-9d5d-54e4-bddb-2f01adce2ad4...
Dec 08 09:46:59 compute-1 podman[79826]: 2025-12-08 09:46:59.377179626 +0000 UTC m=+0.052334266 container create 064bc633f5094f4cd12630a44489bf86c60a22470c165e28fd36e94bb6173e85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mon-compute-1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 08 09:46:59 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42635e7211fe49de3fab89ae3f7c8ea9a99421df68ed9e38ab01cdd832202442/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 08 09:46:59 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42635e7211fe49de3fab89ae3f7c8ea9a99421df68ed9e38ab01cdd832202442/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 08 09:46:59 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42635e7211fe49de3fab89ae3f7c8ea9a99421df68ed9e38ab01cdd832202442/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 08 09:46:59 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42635e7211fe49de3fab89ae3f7c8ea9a99421df68ed9e38ab01cdd832202442/merged/var/lib/ceph/mon/ceph-compute-1 supports timestamps until 2038 (0x7fffffff)
Dec 08 09:46:59 compute-1 podman[79826]: 2025-12-08 09:46:59.352978739 +0000 UTC m=+0.028133429 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 08 09:46:59 compute-1 podman[79826]: 2025-12-08 09:46:59.448736868 +0000 UTC m=+0.123891488 container init 064bc633f5094f4cd12630a44489bf86c60a22470c165e28fd36e94bb6173e85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mon-compute-1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 08 09:46:59 compute-1 podman[79826]: 2025-12-08 09:46:59.46500328 +0000 UTC m=+0.140157900 container start 064bc633f5094f4cd12630a44489bf86c60a22470c165e28fd36e94bb6173e85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mon-compute-1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec 08 09:46:59 compute-1 bash[79826]: 064bc633f5094f4cd12630a44489bf86c60a22470c165e28fd36e94bb6173e85
Dec 08 09:46:59 compute-1 systemd[1]: Started Ceph mon.compute-1 for ceb838ef-9d5d-54e4-bddb-2f01adce2ad4.
Dec 08 09:46:59 compute-1 ceph-mon[79846]: set uid:gid to 167:167 (ceph:ceph)
Dec 08 09:46:59 compute-1 ceph-mon[79846]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Dec 08 09:46:59 compute-1 ceph-mon[79846]: pidfile_write: ignore empty --pid-file
Dec 08 09:46:59 compute-1 ceph-mon[79846]: load: jerasure load: lrc 
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb: RocksDB version: 7.9.2
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb: Git sha 0
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb: Compile date 2025-07-17 03:12:14
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb: DB SUMMARY
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb: DB Session ID:  N6WMD309NYTLX53YA9N0
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb: CURRENT file:  CURRENT
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb: IDENTITY file:  IDENTITY
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-1/store.db dir, Total Num: 0, files: 
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-1/store.db: 000004.log size: 511 ; 
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                         Options.error_if_exists: 0
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                       Options.create_if_missing: 0
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                         Options.paranoid_checks: 1
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                                     Options.env: 0x55b751fc6c20
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                                      Options.fs: PosixFileSystem
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                                Options.info_log: 0x55b752c16e40
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                Options.max_file_opening_threads: 16
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                              Options.statistics: (nil)
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                               Options.use_fsync: 0
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                       Options.max_log_file_size: 0
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                         Options.allow_fallocate: 1
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                        Options.use_direct_reads: 0
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:          Options.create_missing_column_families: 0
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                              Options.db_log_dir: 
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                                 Options.wal_dir: 
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                   Options.advise_random_on_open: 1
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                    Options.write_buffer_manager: 0x55b752c1b900
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                            Options.rate_limiter: (nil)
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                  Options.unordered_write: 0
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                               Options.row_cache: None
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                              Options.wal_filter: None
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:             Options.allow_ingest_behind: 0
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:             Options.two_write_queues: 0
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:             Options.manual_wal_flush: 0
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:             Options.wal_compression: 0
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:             Options.atomic_flush: 0
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                 Options.log_readahead_size: 0
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:             Options.allow_data_in_errors: 0
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:             Options.db_host_id: __hostname__
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:             Options.max_background_jobs: 2
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:             Options.max_background_compactions: -1
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:             Options.max_subcompactions: 1
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:             Options.max_total_wal_size: 0
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                          Options.max_open_files: -1
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                          Options.bytes_per_sync: 0
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:       Options.compaction_readahead_size: 0
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                  Options.max_background_flushes: -1
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb: Compression algorithms supported:
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:         kZSTD supported: 0
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:         kXpressCompression supported: 0
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:         kBZip2Compression supported: 0
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:         kLZ4Compression supported: 1
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:         kZlibCompression supported: 1
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:         kLZ4HCCompression supported: 1
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:         kSnappyCompression supported: 1
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-1/store.db/MANIFEST-000005
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:           Options.merge_operator: 
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:        Options.compaction_filter: None
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:        Options.compaction_filter_factory: None
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:  Options.sst_partitioner_factory: None
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b752c16700)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b752c3b350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:        Options.write_buffer_size: 33554432
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:  Options.max_write_buffer_number: 2
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:          Options.compression: NoCompression
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:       Options.prefix_extractor: nullptr
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:             Options.num_levels: 7
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                  Options.compression_opts.level: 32767
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:               Options.compression_opts.strategy: 0
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                  Options.compression_opts.enabled: false
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                        Options.arena_block_size: 1048576
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                Options.disable_auto_compactions: 0
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                   Options.inplace_update_support: 0
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                           Options.bloom_locality: 0
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                    Options.max_successive_merges: 0
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                Options.paranoid_file_checks: 0
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                Options.force_consistency_checks: 1
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                Options.report_bg_io_stats: 0
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                               Options.ttl: 2592000
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                       Options.enable_blob_files: false
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                           Options.min_blob_size: 0
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                          Options.blob_file_size: 268435456
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb:                Options.blob_file_starting_level: 0
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
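The level sizing options dumped above (target_file_size_base 67108864, max_bytes_for_level_base 268435456, max_bytes_for_level_multiplier 10, num_levels 7) imply per-level byte targets. A minimal Python sketch of the static interpretation follows; note that level_compaction_dynamic_level_bytes is 1 here, so RocksDB actually derives the real targets dynamically and this is illustrative only.

```python
# Illustrative only: static per-level capacities implied by the options
# logged above. With level_compaction_dynamic_level_bytes=1, RocksDB
# sizes levels dynamically from the last level upward instead.
max_bytes_for_level_base = 268_435_456        # 256 MiB at the base level
max_bytes_for_level_multiplier = 10.0
num_levels = 7

for level in range(1, num_levels):
    cap = max_bytes_for_level_base * max_bytes_for_level_multiplier ** (level - 1)
    print(f"L{level}: {cap / 2**30:.2f} GiB")
```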
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-1/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 80a3bbeb-1e70-44e2-b668-bf1fa77bc39c
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765187219520897, "job": 1, "event": "recovery_started", "wal_files": [4]}
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765187219523129, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1648, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 523, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 401, "raw_average_value_size": 80, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765187219, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "80a3bbeb-1e70-44e2-b668-bf1fa77bc39c", "db_session_id": "N6WMD309NYTLX53YA9N0", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765187219523272, "job": 1, "event": "recovery_finished"}
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Dec 08 09:46:59 compute-1 sudo[79545]: pam_unix(sudo:session): session closed for user root
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55b752c3ce00
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb: DB pointer 0x55b752d46000
Dec 08 09:46:59 compute-1 ceph-mon[79846]: mon.compute-1 does not exist in monmap, will attempt to join an existing cluster
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 08 09:46:59 compute-1 ceph-mon[79846]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.09 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.09 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b752c3b350#2 capacity: 512.00 MB usage: 0.86 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(2,0.64 KB,0.00012219%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 08 09:46:59 compute-1 ceph-mon[79846]: using public_addr v2:192.168.122.101:0/0 -> [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0]
Dec 08 09:46:59 compute-1 ceph-mon[79846]: starting mon.compute-1 rank -1 at public addrs [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] at bind addrs [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-1 fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4
Dec 08 09:46:59 compute-1 ceph-mon[79846]: mon.compute-1@-1(???) e0 preinit fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4
Dec 08 09:46:59 compute-1 ceph-mon[79846]: mon.compute-1@-1(synchronizing).mds e1 new map
Dec 08 09:46:59 compute-1 ceph-mon[79846]: mon.compute-1@-1(synchronizing).mds e1 print_map
                                           e1
                                           btime 2025-12-08T09:44:57.301434+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Dec 08 09:46:59 compute-1 ceph-mon[79846]: mon.compute-1@-1(synchronizing).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Dec 08 09:46:59 compute-1 ceph-mon[79846]: mon.compute-1@-1(synchronizing).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Dec 08 09:46:59 compute-1 ceph-mon[79846]: mon.compute-1@-1(synchronizing).osd e1 e1: 0 total, 0 up, 0 in
Dec 08 09:46:59 compute-1 ceph-mon[79846]: mon.compute-1@-1(synchronizing).osd e2 e2: 0 total, 0 up, 0 in
Dec 08 09:46:59 compute-1 ceph-mon[79846]: mon.compute-1@-1(synchronizing).osd e3 e3: 0 total, 0 up, 0 in
Dec 08 09:46:59 compute-1 ceph-mon[79846]: mon.compute-1@-1(synchronizing).osd e4 e4: 1 total, 0 up, 1 in
Dec 08 09:46:59 compute-1 ceph-mon[79846]: mon.compute-1@-1(synchronizing).osd e5 e5: 2 total, 0 up, 2 in
Dec 08 09:46:59 compute-1 ceph-mon[79846]: mon.compute-1@-1(synchronizing).osd e6 e6: 2 total, 0 up, 2 in
Dec 08 09:46:59 compute-1 ceph-mon[79846]: mon.compute-1@-1(synchronizing).osd e7 e7: 2 total, 0 up, 2 in
Dec 08 09:46:59 compute-1 ceph-mon[79846]: mon.compute-1@-1(synchronizing).osd e8 e8: 2 total, 2 up, 2 in
Dec 08 09:46:59 compute-1 ceph-mon[79846]: mon.compute-1@-1(synchronizing).osd e9 e9: 2 total, 2 up, 2 in
Dec 08 09:46:59 compute-1 ceph-mon[79846]: mon.compute-1@-1(synchronizing).osd e10 e10: 2 total, 2 up, 2 in
Dec 08 09:46:59 compute-1 ceph-mon[79846]: mon.compute-1@-1(synchronizing).osd e11 e11: 2 total, 2 up, 2 in
Dec 08 09:46:59 compute-1 ceph-mon[79846]: mon.compute-1@-1(synchronizing).osd e11 crush map has features 3314933000852226048, adjusting msgr requires
Dec 08 09:46:59 compute-1 ceph-mon[79846]: mon.compute-1@-1(synchronizing).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Dec 08 09:46:59 compute-1 ceph-mon[79846]: mon.compute-1@-1(synchronizing).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Dec 08 09:46:59 compute-1 ceph-mon[79846]: mon.compute-1@-1(synchronizing).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Dec 08 09:46:59 compute-1 ceph-mon[79846]: pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 08 09:46:59 compute-1 ceph-mon[79846]: pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 08 09:46:59 compute-1 ceph-mon[79846]: Updating compute-1:/etc/ceph/ceph.conf
Dec 08 09:46:59 compute-1 ceph-mon[79846]: pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 08 09:46:59 compute-1 ceph-mon[79846]: Updating compute-1:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf
Dec 08 09:46:59 compute-1 ceph-mon[79846]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec 08 09:46:59 compute-1 ceph-mon[79846]: Updating compute-1:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.client.admin.keyring
Dec 08 09:46:59 compute-1 ceph-mon[79846]: pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:46:59 compute-1 ceph-mon[79846]: Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
                                           service_name: mon
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Dec 08 09:46:59 compute-1 ceph-mon[79846]: pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 08 09:46:59 compute-1 ceph-mon[79846]: Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
                                           service_name: mgr
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Dec 08 09:46:59 compute-1 ceph-mon[79846]: pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 08 09:46:59 compute-1 ceph-mon[79846]: Deploying daemon crash.compute-1 on compute-1
Dec 08 09:46:59 compute-1 ceph-mon[79846]: Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Dec 08 09:46:59 compute-1 ceph-mon[79846]: pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:46:59 compute-1 ceph-mon[79846]: pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='client.? 192.168.122.101:0/748161812' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c550a2b3-dc83-454a-a82b-745064d6ae84"}]: dispatch
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='client.? 192.168.122.101:0/748161812' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "c550a2b3-dc83-454a-a82b-745064d6ae84"}]': finished
Dec 08 09:46:59 compute-1 ceph-mon[79846]: osdmap e4: 1 total, 0 up, 1 in
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='client.? 192.168.122.100:0/604360874' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "10863df8-16d4-4896-ae26-227efb76290e"}]: dispatch
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='client.? 192.168.122.100:0/604360874' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "10863df8-16d4-4896-ae26-227efb76290e"}]': finished
Dec 08 09:46:59 compute-1 ceph-mon[79846]: osdmap e5: 2 total, 0 up, 2 in
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='client.? 192.168.122.101:0/1327518535' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='client.? 192.168.122.100:0/1141389350' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Dec 08 09:46:59 compute-1 ceph-mon[79846]: pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 08 09:46:59 compute-1 ceph-mon[79846]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Dec 08 09:46:59 compute-1 ceph-mon[79846]: pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 08 09:46:59 compute-1 ceph-mon[79846]: Deploying daemon osd.1 on compute-0
Dec 08 09:46:59 compute-1 ceph-mon[79846]: pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 08 09:46:59 compute-1 ceph-mon[79846]: Deploying daemon osd.0 on compute-1
Dec 08 09:46:59 compute-1 ceph-mon[79846]: pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='client.? 192.168.122.100:0/2234269109' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 08 09:46:59 compute-1 ceph-mon[79846]: pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:46:59 compute-1 ceph-mon[79846]: pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='osd.0 [v2:192.168.122.101:6800/3218940105,v1:192.168.122.101:6801/3218940105]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='osd.1 [v2:192.168.122.100:6802/2769354488,v1:192.168.122.100:6803/2769354488]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:46:59 compute-1 ceph-mon[79846]: pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='osd.0 [v2:192.168.122.101:6800/3218940105,v1:192.168.122.101:6801/3218940105]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='osd.1 [v2:192.168.122.100:6802/2769354488,v1:192.168.122.100:6803/2769354488]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Dec 08 09:46:59 compute-1 ceph-mon[79846]: osdmap e6: 2 total, 0 up, 2 in
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='osd.1 [v2:192.168.122.100:6802/2769354488,v1:192.168.122.100:6803/2769354488]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='osd.0 [v2:192.168.122.101:6800/3218940105,v1:192.168.122.101:6801/3218940105]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='osd.1 [v2:192.168.122.100:6802/2769354488,v1:192.168.122.100:6803/2769354488]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='osd.0 [v2:192.168.122.101:6800/3218940105,v1:192.168.122.101:6801/3218940105]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]': finished
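The weight 0.0195 in the create-or-move commands above follows the usual CRUSH convention of capacity expressed in TiB. Assuming each OSD sits on a roughly 20 GiB device, consistent with the 40 GiB cluster total reported in the pgmap lines below:

```python
# The 0.0195 weight above is capacity in TiB (CRUSH convention). Assuming a
# ~20 GiB device per OSD, consistent with the 40 GiB total seen later:
capacity_gib = 20
weight_tib = capacity_gib / 1024
print(f"{weight_tib:.4f}")   # 0.0195
```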
Dec 08 09:46:59 compute-1 ceph-mon[79846]: osdmap e7: 2 total, 0 up, 2 in
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Dec 08 09:46:59 compute-1 ceph-mon[79846]: Adjusting osd_memory_target on compute-1 to  5248M
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:46:59 compute-1 ceph-mon[79846]: pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 08 09:46:59 compute-1 ceph-mon[79846]: purged_snaps scrub starts
Dec 08 09:46:59 compute-1 ceph-mon[79846]: purged_snaps scrub ok
Dec 08 09:46:59 compute-1 ceph-mon[79846]: purged_snaps scrub starts
Dec 08 09:46:59 compute-1 ceph-mon[79846]: purged_snaps scrub ok
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Dec 08 09:46:59 compute-1 ceph-mon[79846]: Adjusting osd_memory_target on compute-0 to 128.0M
Dec 08 09:46:59 compute-1 ceph-mon[79846]: Unable to set osd_memory_target on compute-0 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
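The two numbers in the rejection above are plain byte counts; converting them confirms the 128.0M value cephadm computed and the floor it ran into:

```python
# Quick check of the byte counts in the message above.
rejected = 134_217_728        # value cephadm tried to set
minimum = 939_524_096         # osd_memory_target lower bound reported

MiB = 2 ** 20
print(f"rejected = {rejected / MiB:.0f} MiB")   # 128 MiB, matching "128.0M"
print(f"minimum  = {minimum / MiB:.0f} MiB")    # 896 MiB
```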
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 08 09:46:59 compute-1 ceph-mon[79846]: pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 08 09:46:59 compute-1 ceph-mon[79846]: OSD bench result of 7704.842246 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 08 09:46:59 compute-1 ceph-mon[79846]: OSD bench result of 7640.665901 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
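The message above already names the remedy; a hedged sketch of applying it is below (the 315 value is only the current default quoted in the log, standing in for a fio-measured figure):

```python
import subprocess

# Minimal sketch of the override recommended above: after measuring real
# IOPS with an external tool (e.g. fio), pin the capacity for a hdd-class
# OSD. 315 is just the default quoted in the log, used as a placeholder.
measured_iops = 315  # replace with the fio-measured value
subprocess.run(
    ["ceph", "config", "set", "osd.0",
     "osd_mclock_max_capacity_iops_hdd", str(measured_iops)],
    check=True,
)
```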
Dec 08 09:46:59 compute-1 ceph-mon[79846]: osd.1 [v2:192.168.122.100:6802/2769354488,v1:192.168.122.100:6803/2769354488] boot
Dec 08 09:46:59 compute-1 ceph-mon[79846]: osd.0 [v2:192.168.122.101:6800/3218940105,v1:192.168.122.101:6801/3218940105] boot
Dec 08 09:46:59 compute-1 ceph-mon[79846]: osdmap e8: 2 total, 2 up, 2 in
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Dec 08 09:46:59 compute-1 ceph-mon[79846]: pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Dec 08 09:46:59 compute-1 ceph-mon[79846]: osdmap e9: 2 total, 2 up, 2 in
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Dec 08 09:46:59 compute-1 ceph-mon[79846]: osdmap e10: 2 total, 2 up, 2 in
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 08 09:46:59 compute-1 ceph-mon[79846]: pgmap v44: 1 pgs: 1 unknown; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Dec 08 09:46:59 compute-1 ceph-mon[79846]: osdmap e11: 2 total, 2 up, 2 in
Dec 08 09:46:59 compute-1 ceph-mon[79846]: mgrmap e9: compute-0.kitiwu(active, since 82s)
Dec 08 09:46:59 compute-1 ceph-mon[79846]: pgmap v46: 1 pgs: 1 unknown; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Dec 08 09:46:59 compute-1 ceph-mon[79846]: pgmap v47: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Dec 08 09:46:59 compute-1 ceph-mon[79846]: pgmap v48: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Dec 08 09:46:59 compute-1 ceph-mon[79846]: pgmap v49: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Dec 08 09:46:59 compute-1 ceph-mon[79846]: pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 08 09:46:59 compute-1 ceph-mon[79846]: Updating compute-2:/etc/ceph/ceph.conf
Dec 08 09:46:59 compute-1 ceph-mon[79846]: pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Dec 08 09:46:59 compute-1 ceph-mon[79846]: Updating compute-2:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf
Dec 08 09:46:59 compute-1 ceph-mon[79846]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec 08 09:46:59 compute-1 ceph-mon[79846]: Updating compute-2:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.client.admin.keyring
Dec 08 09:46:59 compute-1 ceph-mon[79846]: pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 08 09:46:59 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 08 09:46:59 compute-1 ceph-mon[79846]: pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Dec 08 09:46:59 compute-1 ceph-mon[79846]: Deploying daemon mon.compute-2 on compute-2
Dec 08 09:46:59 compute-1 ceph-mon[79846]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Dec 08 09:46:59 compute-1 ceph-mon[79846]: Cluster is now healthy
Dec 08 09:46:59 compute-1 ceph-mon[79846]: mon.compute-1@-1(synchronizing).paxosservice(auth 1..7) refresh upgraded, format 0 -> 3
Dec 08 09:47:05 compute-1 ceph-mon[79846]: mon.compute-1@-1(probing) e3  my rank is now 2 (was -1)
Dec 08 09:47:05 compute-1 ceph-mon[79846]: log_channel(cluster) log [INF] : mon.compute-1 calling monitor election
Dec 08 09:47:05 compute-1 ceph-mon[79846]: paxos.2).electionLogic(0) init, first boot, initializing epoch at 1 
Dec 08 09:47:05 compute-1 ceph-mon[79846]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 08 09:47:08 compute-1 ceph-mon[79846]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 08 09:47:09 compute-1 ceph-mon[79846]: mon.compute-1@2(peon) e3 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Dec 08 09:47:09 compute-1 ceph-mon[79846]: mon.compute-1@2(peon) e3 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout}
Dec 08 09:47:09 compute-1 ceph-mon[79846]: Deploying daemon mon.compute-1 on compute-1
Dec 08 09:47:09 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 08 09:47:09 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 08 09:47:09 compute-1 ceph-mon[79846]: mon.compute-0 calling monitor election
Dec 08 09:47:09 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 08 09:47:09 compute-1 ceph-mon[79846]: pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Dec 08 09:47:09 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 08 09:47:09 compute-1 ceph-mon[79846]: mon.compute-2 calling monitor election
Dec 08 09:47:09 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 08 09:47:09 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 08 09:47:09 compute-1 ceph-mon[79846]: pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Dec 08 09:47:09 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 08 09:47:09 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 08 09:47:09 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 08 09:47:09 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 08 09:47:09 compute-1 ceph-mon[79846]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Dec 08 09:47:09 compute-1 ceph-mon[79846]: monmap epoch 2
Dec 08 09:47:09 compute-1 ceph-mon[79846]: fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4
Dec 08 09:47:09 compute-1 ceph-mon[79846]: last_changed 2025-12-08T09:46:57.351280+0000
Dec 08 09:47:09 compute-1 ceph-mon[79846]: created 2025-12-08T09:44:55.163607+0000
Dec 08 09:47:09 compute-1 ceph-mon[79846]: min_mon_release 19 (squid)
Dec 08 09:47:09 compute-1 ceph-mon[79846]: election_strategy: 1
Dec 08 09:47:09 compute-1 ceph-mon[79846]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec 08 09:47:09 compute-1 ceph-mon[79846]: 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Dec 08 09:47:09 compute-1 ceph-mon[79846]: fsmap 
Dec 08 09:47:09 compute-1 ceph-mon[79846]: osdmap e11: 2 total, 2 up, 2 in
Dec 08 09:47:09 compute-1 ceph-mon[79846]: mgrmap e9: compute-0.kitiwu(active, since 104s)
Dec 08 09:47:09 compute-1 ceph-mon[79846]: overall HEALTH_OK
Dec 08 09:47:09 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:09 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:09 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:09 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:09 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.zqytsv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 08 09:47:09 compute-1 ceph-mon[79846]: pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Dec 08 09:47:09 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.zqytsv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec 08 09:47:09 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 08 09:47:09 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 08 09:47:09 compute-1 ceph-mon[79846]: Deploying daemon mgr.compute-2.zqytsv on compute-2
Dec 08 09:47:09 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 08 09:47:09 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:09 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 08 09:47:09 compute-1 ceph-mon[79846]: mon.compute-1@2(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 08 09:47:09 compute-1 ceph-mon[79846]: mgrc update_daemon_metadata mon.compute-1 metadata {addrs=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-1,container_image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec,cpu=AMD EPYC-Rome Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-1,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025,kernel_version=5.14.0-645.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864320,os=Linux}
Dec 08 09:47:09 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 08 09:47:09 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 08 09:47:09 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 08 09:47:09 compute-1 ceph-mon[79846]: mon.compute-0 calling monitor election
Dec 08 09:47:09 compute-1 ceph-mon[79846]: mon.compute-2 calling monitor election
Dec 08 09:47:09 compute-1 ceph-mon[79846]: from='client.? 192.168.122.100:0/2567547284' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 08 09:47:09 compute-1 ceph-mon[79846]: pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Dec 08 09:47:09 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 08 09:47:09 compute-1 ceph-mon[79846]: mon.compute-1 calling monitor election
Dec 08 09:47:09 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 08 09:47:09 compute-1 ceph-mon[79846]: pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Dec 08 09:47:09 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 08 09:47:09 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 08 09:47:09 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 08 09:47:09 compute-1 ceph-mon[79846]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Dec 08 09:47:09 compute-1 ceph-mon[79846]: monmap epoch 3
Dec 08 09:47:09 compute-1 ceph-mon[79846]: fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4
Dec 08 09:47:09 compute-1 ceph-mon[79846]: last_changed 2025-12-08T09:47:03.886776+0000
Dec 08 09:47:09 compute-1 ceph-mon[79846]: created 2025-12-08T09:44:55.163607+0000
Dec 08 09:47:09 compute-1 ceph-mon[79846]: min_mon_release 19 (squid)
Dec 08 09:47:09 compute-1 ceph-mon[79846]: election_strategy: 1
Dec 08 09:47:09 compute-1 ceph-mon[79846]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec 08 09:47:09 compute-1 ceph-mon[79846]: 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Dec 08 09:47:09 compute-1 ceph-mon[79846]: 2: [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon.compute-1
Dec 08 09:47:09 compute-1 ceph-mon[79846]: fsmap 
Dec 08 09:47:09 compute-1 ceph-mon[79846]: osdmap e11: 2 total, 2 up, 2 in
Dec 08 09:47:09 compute-1 ceph-mon[79846]: mgrmap e9: compute-0.kitiwu(active, since 111s)
Dec 08 09:47:09 compute-1 ceph-mon[79846]: overall HEALTH_OK
Dec 08 09:47:09 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:09 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:09 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:09 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.mmkaif", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 08 09:47:09 compute-1 sudo[79885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 08 09:47:09 compute-1 sudo[79885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:47:09 compute-1 sudo[79885]: pam_unix(sudo:session): session closed for user root
Dec 08 09:47:09 compute-1 sudo[79910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4
Dec 08 09:47:09 compute-1 sudo[79910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:47:09 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e11 _set_new_cache_sizes cache_size:1019935493 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 08 09:47:09 compute-1 podman[79976]: 2025-12-08 09:47:09.752228292 +0000 UTC m=+0.050857784 container create aa09fb90d14d3670f45a0f05f30c803a041ddb370e9130372a971c9fc42d161a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_poitras, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec 08 09:47:09 compute-1 systemd[1]: Started libpod-conmon-aa09fb90d14d3670f45a0f05f30c803a041ddb370e9130372a971c9fc42d161a.scope.
Dec 08 09:47:09 compute-1 podman[79976]: 2025-12-08 09:47:09.729890798 +0000 UTC m=+0.028520280 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 08 09:47:09 compute-1 systemd[1]: Started libcrun container.
Dec 08 09:47:09 compute-1 podman[79976]: 2025-12-08 09:47:09.875731959 +0000 UTC m=+0.174361501 container init aa09fb90d14d3670f45a0f05f30c803a041ddb370e9130372a971c9fc42d161a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_poitras, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec 08 09:47:09 compute-1 podman[79976]: 2025-12-08 09:47:09.886823923 +0000 UTC m=+0.185453435 container start aa09fb90d14d3670f45a0f05f30c803a041ddb370e9130372a971c9fc42d161a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 08 09:47:09 compute-1 podman[79976]: 2025-12-08 09:47:09.891318301 +0000 UTC m=+0.189947803 container attach aa09fb90d14d3670f45a0f05f30c803a041ddb370e9130372a971c9fc42d161a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_poitras, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 08 09:47:09 compute-1 blissful_poitras[79992]: 167 167
Dec 08 09:47:09 compute-1 systemd[1]: libpod-aa09fb90d14d3670f45a0f05f30c803a041ddb370e9130372a971c9fc42d161a.scope: Deactivated successfully.
Dec 08 09:47:09 compute-1 podman[79976]: 2025-12-08 09:47:09.897751454 +0000 UTC m=+0.196380956 container died aa09fb90d14d3670f45a0f05f30c803a041ddb370e9130372a971c9fc42d161a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_poitras, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec 08 09:47:09 compute-1 systemd[1]: var-lib-containers-storage-overlay-2fb4eb488281b95ce81e6d043f94a7e0fcd926146b1115ee6c22f7e0822d2d6a-merged.mount: Deactivated successfully.
Dec 08 09:47:09 compute-1 podman[79976]: 2025-12-08 09:47:09.952869578 +0000 UTC m=+0.251499070 container remove aa09fb90d14d3670f45a0f05f30c803a041ddb370e9130372a971c9fc42d161a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_poitras, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec 08 09:47:09 compute-1 systemd[1]: libpod-conmon-aa09fb90d14d3670f45a0f05f30c803a041ddb370e9130372a971c9fc42d161a.scope: Deactivated successfully.
Dec 08 09:47:10 compute-1 systemd[1]: Reloading.
Dec 08 09:47:10 compute-1 ceph-mon[79846]: pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Dec 08 09:47:10 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.mmkaif", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec 08 09:47:10 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 08 09:47:10 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 08 09:47:10 compute-1 ceph-mon[79846]: Deploying daemon mgr.compute-1.mmkaif on compute-1
Dec 08 09:47:10 compute-1 ceph-mon[79846]: from='client.? 192.168.122.100:0/4242671449' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 08 09:47:10 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 08 09:47:10 compute-1 systemd-sysv-generator[80035]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 08 09:47:10 compute-1 systemd-rc-local-generator[80029]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 08 09:47:10 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e12 e12: 2 total, 2 up, 2 in
Dec 08 09:47:10 compute-1 systemd[1]: Reloading.
Dec 08 09:47:10 compute-1 systemd-rc-local-generator[80075]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 08 09:47:10 compute-1 systemd-sysv-generator[80079]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 08 09:47:10 compute-1 systemd[1]: Starting Ceph mgr.compute-1.mmkaif for ceb838ef-9d5d-54e4-bddb-2f01adce2ad4...
Dec 08 09:47:10 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 12 pg[2.0( empty local-lis/les=0/0 n=0 ec=12/12 lis/c=0/0 les/c/f=0/0/0 sis=12) [0] r=0 lpr=12 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:10 compute-1 podman[80133]: 2025-12-08 09:47:10.895644742 +0000 UTC m=+0.062881286 container create 9f365c7893a6be75664da1f049796f6f5a45360f8461ccfc55c0cc7542ac0168 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 08 09:47:10 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/455157c7a477a0626bb5492263254d90da58718e9a001f437a318b5c6ef42efb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 08 09:47:10 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/455157c7a477a0626bb5492263254d90da58718e9a001f437a318b5c6ef42efb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 08 09:47:10 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/455157c7a477a0626bb5492263254d90da58718e9a001f437a318b5c6ef42efb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 08 09:47:10 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/455157c7a477a0626bb5492263254d90da58718e9a001f437a318b5c6ef42efb/merged/var/lib/ceph/mgr/ceph-compute-1.mmkaif supports timestamps until 2038 (0x7fffffff)
Dec 08 09:47:10 compute-1 podman[80133]: 2025-12-08 09:47:10.959573797 +0000 UTC m=+0.126810391 container init 9f365c7893a6be75664da1f049796f6f5a45360f8461ccfc55c0cc7542ac0168 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Dec 08 09:47:10 compute-1 podman[80133]: 2025-12-08 09:47:10.873514034 +0000 UTC m=+0.040750568 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 08 09:47:10 compute-1 podman[80133]: 2025-12-08 09:47:10.969105017 +0000 UTC m=+0.136341561 container start 9f365c7893a6be75664da1f049796f6f5a45360f8461ccfc55c0cc7542ac0168 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1)
Dec 08 09:47:10 compute-1 bash[80133]: 9f365c7893a6be75664da1f049796f6f5a45360f8461ccfc55c0cc7542ac0168
Dec 08 09:47:10 compute-1 systemd[1]: Started Ceph mgr.compute-1.mmkaif for ceb838ef-9d5d-54e4-bddb-2f01adce2ad4.
Dec 08 09:47:11 compute-1 ceph-mgr[80153]: set uid:gid to 167:167 (ceph:ceph)
Dec 08 09:47:11 compute-1 ceph-mgr[80153]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Dec 08 09:47:11 compute-1 ceph-mgr[80153]: pidfile_write: ignore empty --pid-file
Dec 08 09:47:11 compute-1 sudo[79910]: pam_unix(sudo:session): session closed for user root
Dec 08 09:47:11 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'alerts'
Dec 08 09:47:11 compute-1 ceph-mgr[80153]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 08 09:47:11 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'balancer'
Dec 08 09:47:11 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:47:11.162+0000 7f774e170140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 08 09:47:11 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e13 e13: 2 total, 2 up, 2 in
Dec 08 09:47:11 compute-1 ceph-mon[79846]: from='client.? 192.168.122.100:0/4242671449' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 08 09:47:11 compute-1 ceph-mon[79846]: osdmap e12: 2 total, 2 up, 2 in
Dec 08 09:47:11 compute-1 ceph-mon[79846]: from='client.? 192.168.122.100:0/4229412466' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 08 09:47:11 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:11 compute-1 ceph-mgr[80153]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 08 09:47:11 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'cephadm'
Dec 08 09:47:11 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:47:11.238+0000 7f774e170140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 08 09:47:11 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 13 pg[2.0( empty local-lis/les=12/13 n=0 ec=12/12 lis/c=0/0 les/c/f=0/0/0 sis=12) [0] r=0 lpr=12 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:11 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'crash'
Dec 08 09:47:12 compute-1 ceph-mgr[80153]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 08 09:47:12 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'dashboard'
Dec 08 09:47:12 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:47:12.066+0000 7f774e170140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 08 09:47:12 compute-1 ceph-mon[79846]: pgmap v62: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Dec 08 09:47:12 compute-1 ceph-mon[79846]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 08 09:47:12 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:12 compute-1 ceph-mon[79846]: from='client.? 192.168.122.100:0/4229412466' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 08 09:47:12 compute-1 ceph-mon[79846]: osdmap e13: 2 total, 2 up, 2 in
Dec 08 09:47:12 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:12 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:12 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec 08 09:47:12 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec 08 09:47:12 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 08 09:47:12 compute-1 ceph-mon[79846]: Deploying daemon crash.compute-2 on compute-2
Dec 08 09:47:12 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e14 e14: 2 total, 2 up, 2 in
Dec 08 09:47:12 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'devicehealth'
Dec 08 09:47:12 compute-1 ceph-mgr[80153]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 08 09:47:12 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'diskprediction_local'
Dec 08 09:47:12 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:47:12.710+0000 7f774e170140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 08 09:47:12 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 08 09:47:12 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 08 09:47:12 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]:   from numpy import show_config as show_numpy_config
Dec 08 09:47:12 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:47:12.882+0000 7f774e170140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 08 09:47:12 compute-1 ceph-mgr[80153]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 08 09:47:12 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'influx'
Dec 08 09:47:12 compute-1 ceph-mgr[80153]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 08 09:47:12 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'insights'
Dec 08 09:47:12 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:47:12.954+0000 7f774e170140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 08 09:47:13 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'iostat'
Dec 08 09:47:13 compute-1 ceph-mgr[80153]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 08 09:47:13 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'k8sevents'
Dec 08 09:47:13 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:47:13.093+0000 7f774e170140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 08 09:47:13 compute-1 ceph-mon[79846]: pgmap v64: 3 pgs: 2 unknown, 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Dec 08 09:47:13 compute-1 ceph-mon[79846]: osdmap e14: 2 total, 2 up, 2 in
Dec 08 09:47:13 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:13 compute-1 ceph-mon[79846]: from='client.? 192.168.122.100:0/2699660867' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 08 09:47:13 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'localpool'
Dec 08 09:47:13 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e15 e15: 2 total, 2 up, 2 in
Dec 08 09:47:13 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'mds_autoscaler'
Dec 08 09:47:13 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'mirroring'
Dec 08 09:47:13 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'nfs'
Dec 08 09:47:14 compute-1 ceph-mgr[80153]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 08 09:47:14 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'orchestrator'
Dec 08 09:47:14 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:47:14.119+0000 7f774e170140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 08 09:47:14 compute-1 ceph-mgr[80153]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 08 09:47:14 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'osd_perf_query'
Dec 08 09:47:14 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:47:14.342+0000 7f774e170140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 08 09:47:14 compute-1 ceph-mgr[80153]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 08 09:47:14 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'osd_support'
Dec 08 09:47:14 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:47:14.418+0000 7f774e170140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 08 09:47:14 compute-1 ceph-mgr[80153]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 08 09:47:14 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'pg_autoscaler'
Dec 08 09:47:14 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:47:14.482+0000 7f774e170140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 08 09:47:14 compute-1 ceph-mon[79846]: from='client.? 192.168.122.100:0/2699660867' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 08 09:47:14 compute-1 ceph-mon[79846]: osdmap e15: 2 total, 2 up, 2 in
Dec 08 09:47:14 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:14 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:14 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:14 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:14 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 08 09:47:14 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 08 09:47:14 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 08 09:47:14 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 08 09:47:14 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 08 09:47:14 compute-1 ceph-mon[79846]: from='client.? 192.168.122.100:0/2698027637' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 08 09:47:14 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e16 e16: 2 total, 2 up, 2 in
Dec 08 09:47:14 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e16 _set_new_cache_sizes cache_size:1020053302 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 08 09:47:14 compute-1 ceph-mgr[80153]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 08 09:47:14 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:47:14.570+0000 7f774e170140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 08 09:47:14 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'progress'
Dec 08 09:47:14 compute-1 ceph-mgr[80153]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 08 09:47:14 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'prometheus'
Dec 08 09:47:14 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:47:14.636+0000 7f774e170140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 08 09:47:14 compute-1 ceph-mgr[80153]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 08 09:47:14 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'rbd_support'
Dec 08 09:47:14 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:47:14.972+0000 7f774e170140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 08 09:47:15 compute-1 ceph-mgr[80153]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 08 09:47:15 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'restful'
Dec 08 09:47:15 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:47:15.073+0000 7f774e170140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 08 09:47:15 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'rgw'
Dec 08 09:47:15 compute-1 ceph-mgr[80153]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 08 09:47:15 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'rook'
Dec 08 09:47:15 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:47:15.534+0000 7f774e170140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 08 09:47:15 compute-1 ceph-mon[79846]: pgmap v67: 4 pgs: 1 unknown, 3 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 08 09:47:15 compute-1 ceph-mon[79846]: from='client.? 192.168.122.100:0/2698027637' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 08 09:47:15 compute-1 ceph-mon[79846]: osdmap e16: 2 total, 2 up, 2 in
Dec 08 09:47:15 compute-1 ceph-mon[79846]: from='client.? 192.168.122.100:0/64121159' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 08 09:47:15 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e17 e17: 2 total, 2 up, 2 in
Dec 08 09:47:15 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e18 e18: 3 total, 2 up, 3 in
Dec 08 09:47:16 compute-1 ceph-mgr[80153]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 08 09:47:16 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'selftest'
Dec 08 09:47:16 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:47:16.126+0000 7f774e170140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 08 09:47:16 compute-1 ceph-mgr[80153]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 08 09:47:16 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'snap_schedule'
Dec 08 09:47:16 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:47:16.207+0000 7f774e170140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 08 09:47:16 compute-1 ceph-mgr[80153]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 08 09:47:16 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'stats'
Dec 08 09:47:16 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:47:16.292+0000 7f774e170140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 08 09:47:16 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'status'
Dec 08 09:47:16 compute-1 ceph-mgr[80153]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec 08 09:47:16 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:47:16.441+0000 7f774e170140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec 08 09:47:16 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'telegraf'
Dec 08 09:47:16 compute-1 ceph-mgr[80153]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 08 09:47:16 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'telemetry'
Dec 08 09:47:16 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:47:16.510+0000 7f774e170140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 08 09:47:16 compute-1 ceph-mon[79846]: from='client.? 192.168.122.100:0/64121159' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 08 09:47:16 compute-1 ceph-mon[79846]: osdmap e17: 2 total, 2 up, 2 in
Dec 08 09:47:16 compute-1 ceph-mon[79846]: from='client.? 192.168.122.102:0/2956902591' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ff8e95fa-0a48-4071-9e37-1bf4e30dac93"}]: dispatch
Dec 08 09:47:16 compute-1 ceph-mon[79846]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ff8e95fa-0a48-4071-9e37-1bf4e30dac93"}]: dispatch
Dec 08 09:47:16 compute-1 ceph-mon[79846]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "ff8e95fa-0a48-4071-9e37-1bf4e30dac93"}]': finished
Dec 08 09:47:16 compute-1 ceph-mon[79846]: osdmap e18: 3 total, 2 up, 3 in
Dec 08 09:47:16 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 08 09:47:16 compute-1 ceph-mon[79846]: Standby manager daemon compute-2.zqytsv started
Dec 08 09:47:16 compute-1 ceph-mon[79846]: from='client.? 192.168.122.102:0/2178226314' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Dec 08 09:47:16 compute-1 ceph-mon[79846]: from='client.? 192.168.122.100:0/3965186026' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 08 09:47:16 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:47:16.667+0000 7f774e170140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 08 09:47:16 compute-1 ceph-mgr[80153]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 08 09:47:16 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'test_orchestrator'
Dec 08 09:47:16 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e19 e19: 3 total, 2 up, 3 in
Dec 08 09:47:16 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 19 pg[7.0( empty local-lis/les=0/0 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [0] r=0 lpr=19 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:16 compute-1 ceph-mgr[80153]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 08 09:47:16 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'volumes'
Dec 08 09:47:16 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:47:16.892+0000 7f774e170140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 08 09:47:17 compute-1 ceph-mgr[80153]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 08 09:47:17 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'zabbix'
Dec 08 09:47:17 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:47:17.152+0000 7f774e170140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 08 09:47:17 compute-1 ceph-mgr[80153]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 08 09:47:17 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:47:17.221+0000 7f774e170140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 08 09:47:17 compute-1 ceph-mgr[80153]: ms_deliver_dispatch: unhandled message 0x559e34cf0d00 mon_map magic: 0 from mon.2 v2:192.168.122.101:3300/0
Dec 08 09:47:17 compute-1 ceph-mon[79846]: pgmap v71: 6 pgs: 3 unknown, 3 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 08 09:47:17 compute-1 ceph-mon[79846]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 08 09:47:17 compute-1 ceph-mon[79846]: from='client.? 192.168.122.100:0/3965186026' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 08 09:47:17 compute-1 ceph-mon[79846]: osdmap e19: 3 total, 2 up, 3 in
Dec 08 09:47:17 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 08 09:47:17 compute-1 ceph-mon[79846]: mgrmap e10: compute-0.kitiwu(active, since 119s), standbys: compute-2.zqytsv
Dec 08 09:47:17 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mgr metadata", "who": "compute-2.zqytsv", "id": "compute-2.zqytsv"}]: dispatch
Dec 08 09:47:17 compute-1 ceph-mon[79846]: Standby manager daemon compute-1.mmkaif started
Dec 08 09:47:18 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e20 e20: 3 total, 2 up, 3 in
Dec 08 09:47:18 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 20 pg[7.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [0] r=0 lpr=19 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:19 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Dec 08 09:47:19 compute-1 ceph-mon[79846]: from='client.? 192.168.122.100:0/1188324566' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Dec 08 09:47:19 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:19 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Dec 08 09:47:19 compute-1 ceph-mon[79846]: from='client.? 192.168.122.100:0/1188324566' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Dec 08 09:47:19 compute-1 ceph-mon[79846]: mgrmap e11: compute-0.kitiwu(active, since 2m), standbys: compute-2.zqytsv, compute-1.mmkaif
Dec 08 09:47:19 compute-1 ceph-mon[79846]: osdmap e20: 3 total, 2 up, 3 in
Dec 08 09:47:19 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 08 09:47:19 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mgr metadata", "who": "compute-1.mmkaif", "id": "compute-1.mmkaif"}]: dispatch
Dec 08 09:47:19 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Dec 08 09:47:19 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 08 09:47:19 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e21 e21: 3 total, 2 up, 3 in
Dec 08 09:47:19 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 21 pg[2.0( empty local-lis/les=12/13 n=0 ec=12/12 lis/c=12/12 les/c/f=13/13/0 sis=21 pruub=8.224267960s) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 active pruub 56.242000580s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:19 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 21 pg[2.0( empty local-lis/les=12/13 n=0 ec=12/12 lis/c=12/12 les/c/f=13/13/0 sis=21 pruub=8.224267960s) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 unknown pruub 56.242000580s@ mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:19 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e21 _set_new_cache_sizes cache_size:1020054712 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 08 09:47:20 compute-1 ceph-mon[79846]: pgmap v74: 7 pgs: 1 creating+peering, 6 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 08 09:47:20 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:20 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:20 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Dec 08 09:47:20 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Dec 08 09:47:20 compute-1 ceph-mon[79846]: osdmap e21: 3 total, 2 up, 3 in
Dec 08 09:47:20 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 08 09:47:20 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Dec 08 09:47:20 compute-1 ceph-mon[79846]: from='client.? 192.168.122.100:0/4067368477' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Dec 08 09:47:20 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e22 e22: 3 total, 2 up, 3 in
Dec 08 09:47:20 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 22 pg[2.1e( empty local-lis/les=12/13 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:20 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 22 pg[2.1f( empty local-lis/les=12/13 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:20 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 22 pg[2.1d( empty local-lis/les=12/13 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:20 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 22 pg[2.1c( empty local-lis/les=12/13 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:20 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 22 pg[2.1b( empty local-lis/les=12/13 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:20 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 22 pg[2.a( empty local-lis/les=12/13 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:20 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 22 pg[2.9( empty local-lis/les=12/13 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:20 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 22 pg[2.8( empty local-lis/les=12/13 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:20 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 22 pg[2.7( empty local-lis/les=12/13 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:20 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 22 pg[2.6( empty local-lis/les=12/13 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:20 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 22 pg[2.4( empty local-lis/les=12/13 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:20 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 22 pg[2.1( empty local-lis/les=12/13 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:20 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 22 pg[2.5( empty local-lis/les=12/13 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:20 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 22 pg[2.c( empty local-lis/les=12/13 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:20 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 22 pg[2.b( empty local-lis/les=12/13 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:20 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 22 pg[2.3( empty local-lis/les=12/13 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:20 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 22 pg[2.2( empty local-lis/les=12/13 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:20 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 22 pg[2.d( empty local-lis/les=12/13 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:20 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 22 pg[2.e( empty local-lis/les=12/13 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:20 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 22 pg[2.10( empty local-lis/les=12/13 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:20 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 22 pg[2.f( empty local-lis/les=12/13 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:20 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 22 pg[2.11( empty local-lis/les=12/13 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:20 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 22 pg[2.12( empty local-lis/les=12/13 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:20 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 22 pg[2.13( empty local-lis/les=12/13 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:20 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 22 pg[2.14( empty local-lis/les=12/13 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:20 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 22 pg[2.15( empty local-lis/les=12/13 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:20 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 22 pg[2.16( empty local-lis/les=12/13 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:20 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 22 pg[2.18( empty local-lis/les=12/13 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:20 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 22 pg[2.17( empty local-lis/les=12/13 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:20 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 22 pg[2.19( empty local-lis/les=12/13 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:20 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 22 pg[2.1a( empty local-lis/les=12/13 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:20 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 22 pg[2.1f( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:20 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 22 pg[2.1b( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:20 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 22 pg[2.9( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:20 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 22 pg[2.8( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:20 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 22 pg[2.1c( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:20 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 22 pg[2.1d( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:20 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 22 pg[2.7( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:20 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 22 pg[2.1e( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:20 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 22 pg[2.5( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:20 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 22 pg[2.1( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:20 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 22 pg[2.4( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:20 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 22 pg[2.a( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:20 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 22 pg[2.0( empty local-lis/les=21/22 n=0 ec=12/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:20 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 22 pg[2.b( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:20 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 22 pg[2.2( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:20 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 22 pg[2.3( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:20 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 22 pg[2.6( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:20 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 22 pg[2.d( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:20 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 22 pg[2.11( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:20 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 22 pg[2.f( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:20 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 22 pg[2.13( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:20 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 22 pg[2.10( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:20 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 22 pg[2.e( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:20 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 22 pg[2.c( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:20 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 22 pg[2.16( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:20 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 22 pg[2.14( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:20 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 22 pg[2.18( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:20 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 22 pg[2.15( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:20 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 22 pg[2.1a( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:20 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 22 pg[2.17( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:20 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 22 pg[2.19( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:20 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 22 pg[2.12( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:20 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 2.1f deep-scrub starts
Dec 08 09:47:20 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 2.1f deep-scrub ok
Dec 08 09:47:21 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e23 e23: 3 total, 2 up, 3 in
Dec 08 09:47:21 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Dec 08 09:47:21 compute-1 ceph-mon[79846]: from='client.? 192.168.122.100:0/4067368477' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Dec 08 09:47:21 compute-1 ceph-mon[79846]: osdmap e22: 3 total, 2 up, 3 in
Dec 08 09:47:21 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 08 09:47:21 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Dec 08 09:47:21 compute-1 ceph-mon[79846]: pgmap v77: 38 pgs: 31 unknown, 1 creating+peering, 6 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 08 09:47:21 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 08 09:47:21 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 08 09:47:21 compute-1 ceph-mon[79846]: from='client.? 192.168.122.100:0/962673436' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Dec 08 09:47:21 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 2.1b deep-scrub starts
Dec 08 09:47:21 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 2.1b deep-scrub ok
Dec 08 09:47:22 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e24 e24: 3 total, 2 up, 3 in
Dec 08 09:47:22 compute-1 ceph-mon[79846]: 2.1f deep-scrub starts
Dec 08 09:47:22 compute-1 ceph-mon[79846]: 2.1f deep-scrub ok
Dec 08 09:47:22 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Dec 08 09:47:22 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Dec 08 09:47:22 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Dec 08 09:47:22 compute-1 ceph-mon[79846]: from='client.? 192.168.122.100:0/962673436' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Dec 08 09:47:22 compute-1 ceph-mon[79846]: osdmap e23: 3 total, 2 up, 3 in
Dec 08 09:47:22 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 08 09:47:22 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Dec 08 09:47:22 compute-1 ceph-mon[79846]: 2.1b deep-scrub starts
Dec 08 09:47:22 compute-1 ceph-mon[79846]: 2.1b deep-scrub ok
Dec 08 09:47:22 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Dec 08 09:47:22 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 08 09:47:22 compute-1 ceph-mon[79846]: Deploying daemon osd.2 on compute-2
Dec 08 09:47:22 compute-1 ceph-mon[79846]: from='client.? 192.168.122.100:0/2842985013' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Dec 08 09:47:22 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Dec 08 09:47:22 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Dec 08 09:47:23 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e25 e25: 3 total, 2 up, 3 in
Dec 08 09:47:23 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Dec 08 09:47:23 compute-1 ceph-mon[79846]: from='client.? 192.168.122.100:0/2842985013' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Dec 08 09:47:23 compute-1 ceph-mon[79846]: osdmap e24: 3 total, 2 up, 3 in
Dec 08 09:47:23 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 08 09:47:23 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Dec 08 09:47:23 compute-1 ceph-mon[79846]: pgmap v80: 100 pgs: 93 unknown, 1 creating+peering, 6 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 08 09:47:23 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 08 09:47:23 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 08 09:47:23 compute-1 ceph-mon[79846]: 2.8 scrub starts
Dec 08 09:47:23 compute-1 ceph-mon[79846]: 2.8 scrub ok
Dec 08 09:47:23 compute-1 ceph-mon[79846]: from='client.? 192.168.122.100:0/1163642721' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Dec 08 09:47:23 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 2.9 deep-scrub starts
Dec 08 09:47:23 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 2.9 deep-scrub ok
Dec 08 09:47:24 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e25 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 08 09:47:24 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Dec 08 09:47:24 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e26 e26: 3 total, 2 up, 3 in
Dec 08 09:47:24 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Dec 08 09:47:24 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 26 pg[7.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=26 pruub=9.626582146s) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active pruub 63.028339386s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:24 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 26 pg[7.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=26 pruub=9.626582146s) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown pruub 63.028339386s@ mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:24 compute-1 ceph-mon[79846]: 4.1e scrub starts
Dec 08 09:47:24 compute-1 ceph-mon[79846]: 4.1e scrub ok
Dec 08 09:47:24 compute-1 ceph-mon[79846]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 08 09:47:24 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Dec 08 09:47:24 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec 08 09:47:24 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Dec 08 09:47:24 compute-1 ceph-mon[79846]: from='client.? 192.168.122.100:0/1163642721' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Dec 08 09:47:24 compute-1 ceph-mon[79846]: osdmap e25: 3 total, 2 up, 3 in
Dec 08 09:47:24 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 08 09:47:24 compute-1 ceph-mon[79846]: 2.9 deep-scrub starts
Dec 08 09:47:24 compute-1 ceph-mon[79846]: 2.9 deep-scrub ok
Dec 08 09:47:24 compute-1 ceph-mon[79846]: 3.17 scrub starts
Dec 08 09:47:24 compute-1 ceph-mon[79846]: 3.17 scrub ok
Dec 08 09:47:24 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 08 09:47:25 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e27 e27: 3 total, 2 up, 3 in
Dec 08 09:47:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 27 pg[7.1f( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 27 pg[7.1c( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 27 pg[7.1d( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 27 pg[7.13( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 27 pg[7.12( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 27 pg[7.10( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 27 pg[7.11( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 27 pg[7.16( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 27 pg[7.17( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 27 pg[7.14( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 27 pg[7.15( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 27 pg[7.a( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 27 pg[7.b( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 27 pg[7.8( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 27 pg[7.9( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 27 pg[7.e( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 27 pg[7.6( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 27 pg[7.5( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 27 pg[7.4( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 27 pg[7.7( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 27 pg[7.1( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 27 pg[7.3( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 27 pg[7.2( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 27 pg[7.c( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 27 pg[7.1e( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 27 pg[7.f( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 27 pg[7.d( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 27 pg[7.19( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 27 pg[7.18( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 27 pg[7.1b( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 27 pg[7.1a( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 27 pg[7.1f( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 27 pg[7.13( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 27 pg[7.10( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 27 pg[7.12( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 27 pg[7.1d( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 27 pg[7.1c( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 27 pg[7.16( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 27 pg[7.17( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 27 pg[7.14( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 27 pg[7.15( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 27 pg[7.a( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 27 pg[7.11( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 27 pg[7.e( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 27 pg[7.6( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 27 pg[7.8( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 27 pg[7.b( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 27 pg[7.4( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 27 pg[7.5( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 27 pg[7.0( empty local-lis/les=26/27 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 27 pg[7.7( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 27 pg[7.1( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 27 pg[7.3( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 27 pg[7.2( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 27 pg[7.9( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 27 pg[7.1e( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 27 pg[7.d( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 27 pg[7.19( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 27 pg[7.c( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 27 pg[7.f( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 27 pg[7.1b( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 27 pg[7.1a( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 27 pg[7.18( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [0] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:25 compute-1 ceph-mon[79846]: pgmap v82: 162 pgs: 62 unknown, 100 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 08 09:47:25 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Dec 08 09:47:25 compute-1 ceph-mon[79846]: 2.1c scrub starts
Dec 08 09:47:25 compute-1 ceph-mon[79846]: osdmap e26: 3 total, 2 up, 3 in
Dec 08 09:47:25 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 08 09:47:25 compute-1 ceph-mon[79846]: 2.1c scrub ok
Dec 08 09:47:25 compute-1 ceph-mon[79846]: 4.1f scrub starts
Dec 08 09:47:25 compute-1 ceph-mon[79846]: 4.1f scrub ok
Dec 08 09:47:25 compute-1 ceph-mon[79846]: from='client.? 192.168.122.100:0/2436376386' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Dec 08 09:47:25 compute-1 ceph-mon[79846]: from='client.? 192.168.122.100:0/2436376386' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Dec 08 09:47:25 compute-1 ceph-mon[79846]: osdmap e27: 3 total, 2 up, 3 in
Dec 08 09:47:25 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 08 09:47:25 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Dec 08 09:47:25 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Dec 08 09:47:26 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Dec 08 09:47:26 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Dec 08 09:47:26 compute-1 ceph-mon[79846]: 2.1d scrub starts
Dec 08 09:47:26 compute-1 ceph-mon[79846]: 3.16 deep-scrub starts
Dec 08 09:47:26 compute-1 ceph-mon[79846]: 2.1d scrub ok
Dec 08 09:47:26 compute-1 ceph-mon[79846]: 3.16 deep-scrub ok
Dec 08 09:47:27 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 2.1 deep-scrub starts
Dec 08 09:47:27 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 2.1 deep-scrub ok
Dec 08 09:47:27 compute-1 ceph-mon[79846]: pgmap v85: 193 pgs: 93 unknown, 100 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 08 09:47:27 compute-1 ceph-mon[79846]: 3.15 scrub starts
Dec 08 09:47:27 compute-1 ceph-mon[79846]: 2.1e scrub starts
Dec 08 09:47:27 compute-1 ceph-mon[79846]: 2.1e scrub ok
Dec 08 09:47:27 compute-1 ceph-mon[79846]: 3.15 scrub ok
Dec 08 09:47:27 compute-1 ceph-mon[79846]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec 08 09:47:27 compute-1 ceph-mon[79846]: Cluster is now healthy
Dec 08 09:47:27 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:27 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:28 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Dec 08 09:47:28 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Dec 08 09:47:29 compute-1 ceph-mon[79846]: 3.18 scrub starts
Dec 08 09:47:29 compute-1 ceph-mon[79846]: 3.18 scrub ok
Dec 08 09:47:29 compute-1 ceph-mon[79846]: 2.1 deep-scrub starts
Dec 08 09:47:29 compute-1 ceph-mon[79846]: 2.1 deep-scrub ok
Dec 08 09:47:29 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:29 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 08 09:47:29 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 08 09:47:29 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 08 09:47:29 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 08 09:47:29 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 08 09:47:29 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 08 09:47:29 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e28 e28: 3 total, 2 up, 3 in
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[5.18( empty local-lis/les=0/0 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=28) [0] r=0 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[4.18( empty local-lis/les=0/0 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[6.1a( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=27/27/0 sis=28) [0] r=0 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[4.1b( empty local-lis/les=0/0 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[3.1c( empty local-lis/les=0/0 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[6.19( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=27/27/0 sis=28) [0] r=0 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[5.1a( empty local-lis/les=0/0 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=28) [0] r=0 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[5.1b( empty local-lis/les=0/0 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=28) [0] r=0 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[4.1a( empty local-lis/les=0/0 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[3.1d( empty local-lis/les=0/0 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[5.1c( empty local-lis/les=0/0 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=28) [0] r=0 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[3.1a( empty local-lis/les=0/0 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[4.c( empty local-lis/les=0/0 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[6.e( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=27/27/0 sis=28) [0] r=0 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[5.e( empty local-lis/les=0/0 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=28) [0] r=0 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[6.d( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=27/27/0 sis=28) [0] r=0 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[3.9( empty local-lis/les=0/0 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[4.e( empty local-lis/les=0/0 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[5.f( empty local-lis/les=0/0 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=28) [0] r=0 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[6.3( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=27/27/0 sis=28) [0] r=0 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[4.1( empty local-lis/les=0/0 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[5.1( empty local-lis/les=0/0 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=28) [0] r=0 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[6.2( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=27/27/0 sis=28) [0] r=0 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[3.5( empty local-lis/les=0/0 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[3.3( empty local-lis/les=0/0 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[6.5( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=27/27/0 sis=28) [0] r=0 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[5.2( empty local-lis/les=0/0 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=28) [0] r=0 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[5.7( empty local-lis/les=0/0 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=28) [0] r=0 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[4.5( empty local-lis/les=0/0 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[6.7( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=27/27/0 sis=28) [0] r=0 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[5.4( empty local-lis/les=0/0 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=28) [0] r=0 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[3.a( empty local-lis/les=0/0 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[4.d( empty local-lis/les=0/0 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[4.a( empty local-lis/les=0/0 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[3.d( empty local-lis/les=0/0 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[6.8( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=27/27/0 sis=28) [0] r=0 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[3.c( empty local-lis/les=0/0 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[4.9( empty local-lis/les=0/0 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[3.e( empty local-lis/les=0/0 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[6.1e( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=27/27/0 sis=28) [0] r=0 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[7.10( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=28 pruub=12.414810181s) [1] r=-1 lpr=28 pi=[26,28)/1 crt=0'0 mlcod 0'0 active pruub 70.412445068s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[2.15( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=15.036453247s) [1] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active pruub 73.034095764s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[7.10( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=28 pruub=12.414781570s) [1] r=-1 lpr=28 pi=[26,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.412445068s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[2.15( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=15.036427498s) [1] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.034095764s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[7.1d( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=28 pruub=12.414770126s) [1] r=-1 lpr=28 pi=[26,28)/1 crt=0'0 mlcod 0'0 active pruub 70.412483215s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[2.19( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=15.036355019s) [1] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active pruub 73.034111023s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[5.9( empty local-lis/les=0/0 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=28) [0] r=0 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[7.13( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=28 pruub=12.414527893s) [1] r=-1 lpr=28 pi=[26,28)/1 crt=0'0 mlcod 0'0 active pruub 70.412399292s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[2.19( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=15.036278725s) [1] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.034111023s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[7.1d( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=28 pruub=12.414499283s) [1] r=-1 lpr=28 pi=[26,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.412483215s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[2.13( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=15.035361290s) [1] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active pruub 73.033462524s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[2.13( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=15.035335541s) [1] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.033462524s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[7.13( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=28 pruub=12.414486885s) [1] r=-1 lpr=28 pi=[26,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.412399292s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[7.14( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=28 pruub=12.414346695s) [1] r=-1 lpr=28 pi=[26,28)/1 crt=0'0 mlcod 0'0 active pruub 70.412651062s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[2.10( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=15.035016060s) [1] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active pruub 73.033332825s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[2.e( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=15.035167694s) [1] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active pruub 73.033500671s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[7.14( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=28 pruub=12.414325714s) [1] r=-1 lpr=28 pi=[26,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.412651062s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[2.e( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=15.035149574s) [1] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.033500671s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[2.10( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=15.034976959s) [1] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.033332825s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[7.b( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=28 pruub=12.414361000s) [1] r=-1 lpr=28 pi=[26,28)/1 crt=0'0 mlcod 0'0 active pruub 70.412834167s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[7.b( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=28 pruub=12.414341927s) [1] r=-1 lpr=28 pi=[26,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.412834167s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[7.8( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=28 pruub=12.414267540s) [1] r=-1 lpr=28 pi=[26,28)/1 crt=0'0 mlcod 0'0 active pruub 70.412788391s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[2.d( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=15.034791946s) [1] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active pruub 73.033332825s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[7.8( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=28 pruub=12.414250374s) [1] r=-1 lpr=28 pi=[26,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.412788391s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[2.c( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=15.034825325s) [1] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active pruub 73.033508301s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[2.c( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=15.034808159s) [1] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.033508301s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[7.a( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=28 pruub=12.413954735s) [1] r=-1 lpr=28 pi=[26,28)/1 crt=0'0 mlcod 0'0 active pruub 70.412689209s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[7.a( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=28 pruub=12.413926125s) [1] r=-1 lpr=28 pi=[26,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.412689209s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[7.e( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=28 pruub=12.413929939s) [1] r=-1 lpr=28 pi=[26,28)/1 crt=0'0 mlcod 0'0 active pruub 70.412765503s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[7.e( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=28 pruub=12.413916588s) [1] r=-1 lpr=28 pi=[26,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.412765503s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[7.9( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=28 pruub=12.414158821s) [1] r=-1 lpr=28 pi=[26,28)/1 crt=0'0 mlcod 0'0 active pruub 70.413017273s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[7.6( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=28 pruub=12.413843155s) [1] r=-1 lpr=28 pi=[26,28)/1 crt=0'0 mlcod 0'0 active pruub 70.412796021s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[7.9( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=28 pruub=12.414120674s) [1] r=-1 lpr=28 pi=[26,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.413017273s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[7.6( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=28 pruub=12.413830757s) [1] r=-1 lpr=28 pi=[26,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.412796021s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[6.a( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=27/27/0 sis=28) [0] r=0 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[3.10( empty local-lis/les=0/0 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[7.4( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=28 pruub=12.413100243s) [1] r=-1 lpr=28 pi=[26,28)/1 crt=0'0 mlcod 0'0 active pruub 70.412879944s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[7.4( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=28 pruub=12.413083076s) [1] r=-1 lpr=28 pi=[26,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.412879944s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[2.4( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=15.033082962s) [1] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active pruub 73.032966614s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[7.3( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=28 pruub=12.413047791s) [1] r=-1 lpr=28 pi=[26,28)/1 crt=0'0 mlcod 0'0 active pruub 70.412986755s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[2.4( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=15.033060074s) [1] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.032966614s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[7.3( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=28 pruub=12.413033485s) [1] r=-1 lpr=28 pi=[26,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.412986755s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[2.6( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=15.033237457s) [1] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active pruub 73.033195496s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[2.6( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=15.033206940s) [1] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.033195496s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[2.9( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=15.032728195s) [1] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active pruub 73.032844543s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[7.2( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=28 pruub=12.412894249s) [1] r=-1 lpr=28 pi=[26,28)/1 crt=0'0 mlcod 0'0 active pruub 70.413009644s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[2.9( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=15.032711983s) [1] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.032844543s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[7.2( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=28 pruub=12.412878990s) [1] r=-1 lpr=28 pi=[26,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.413009644s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[7.1e( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=28 pruub=12.412779808s) [1] r=-1 lpr=28 pi=[26,28)/1 crt=0'0 mlcod 0'0 active pruub 70.413047791s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[2.a( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=15.032934189s) [1] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active pruub 73.033195496s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[7.f( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=28 pruub=12.412825584s) [1] r=-1 lpr=28 pi=[26,28)/1 crt=0'0 mlcod 0'0 active pruub 70.413093567s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[7.1e( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=28 pruub=12.412755966s) [1] r=-1 lpr=28 pi=[26,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.413047791s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[2.a( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=15.032901764s) [1] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.033195496s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[7.f( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=28 pruub=12.412780762s) [1] r=-1 lpr=28 pi=[26,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.413093567s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[2.1b( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=15.032420158s) [1] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active pruub 73.032829285s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[2.1b( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=15.032403946s) [1] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.032829285s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[7.18( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=28 pruub=12.412780762s) [1] r=-1 lpr=28 pi=[26,28)/1 crt=0'0 mlcod 0'0 active pruub 70.413345337s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[3.f( empty local-lis/les=0/0 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[2.d( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=15.034759521s) [1] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.033332825s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[7.1b( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=28 pruub=12.412251472s) [1] r=-1 lpr=28 pi=[26,28)/1 crt=0'0 mlcod 0'0 active pruub 70.413162231s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[2.1f( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=15.027493477s) [1] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active pruub 73.028442383s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[7.1b( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=28 pruub=12.412232399s) [1] r=-1 lpr=28 pi=[26,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.413162231s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[2.1e( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=15.031963348s) [1] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active pruub 73.032936096s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[2.1f( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=15.027473450s) [1] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.028442383s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[2.1e( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=15.031930923s) [1] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.032936096s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[6.15( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=27/27/0 sis=28) [0] r=0 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[7.18( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=28 pruub=12.412745476s) [1] r=-1 lpr=28 pi=[26,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.413345337s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[4.8( empty local-lis/les=0/0 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[2.1( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=15.031629562s) [1] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active pruub 73.032974243s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[2.1( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=15.031567574s) [1] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.032974243s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[3.11( empty local-lis/les=0/0 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[5.16( empty local-lis/les=0/0 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=28) [0] r=0 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[4.15( empty local-lis/les=0/0 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[6.17( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=27/27/0 sis=28) [0] r=0 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[4.13( empty local-lis/les=0/0 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[3.13( empty local-lis/les=0/0 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[3.14( empty local-lis/les=0/0 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[5.10( empty local-lis/les=0/0 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=28) [0] r=0 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[5.15( empty local-lis/les=0/0 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=28) [0] r=0 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[3.16( empty local-lis/les=0/0 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[6.12( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=27/27/0 sis=28) [0] r=0 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[4.1f( empty local-lis/les=0/0 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[5.11( empty local-lis/les=0/0 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=28) [0] r=0 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[6.1c( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=27/27/0 sis=28) [0] r=0 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[3.15( empty local-lis/les=0/0 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:29 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 28 pg[5.1f( empty local-lis/les=0/0 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=28) [0] r=0 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:29 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e28 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 08 09:47:29 compute-1 sudo[80185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 08 09:47:29 compute-1 sudo[80185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:47:29 compute-1 sudo[80185]: pam_unix(sudo:session): session closed for user root
Dec 08 09:47:29 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Dec 08 09:47:29 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Dec 08 09:47:29 compute-1 sudo[80210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 08 09:47:29 compute-1 sudo[80210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:47:29 compute-1 sudo[80210]: pam_unix(sudo:session): session closed for user root
Dec 08 09:47:29 compute-1 sudo[80235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Dec 08 09:47:29 compute-1 sudo[80235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:47:30 compute-1 ceph-mon[79846]: pgmap v86: 193 pgs: 193 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 08 09:47:30 compute-1 ceph-mon[79846]: 4.12 scrub starts
Dec 08 09:47:30 compute-1 ceph-mon[79846]: 4.12 scrub ok
Dec 08 09:47:30 compute-1 ceph-mon[79846]: 2.7 scrub starts
Dec 08 09:47:30 compute-1 ceph-mon[79846]: 2.7 scrub ok
Dec 08 09:47:30 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 08 09:47:30 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 08 09:47:30 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 08 09:47:30 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 08 09:47:30 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 08 09:47:30 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 08 09:47:30 compute-1 ceph-mon[79846]: osdmap e28: 3 total, 2 up, 3 in
Dec 08 09:47:30 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 08 09:47:30 compute-1 ceph-mon[79846]: from='client.? 192.168.122.100:0/2897888694' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec 08 09:47:30 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:30 compute-1 ceph-mon[79846]: from='client.? 192.168.122.100:0/2897888694' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec 08 09:47:30 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:30 compute-1 ceph-mon[79846]: from='osd.2 [v2:192.168.122.102:6800/2213880029,v1:192.168.122.102:6801/2213880029]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Dec 08 09:47:30 compute-1 ceph-mon[79846]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Dec 08 09:47:30 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e29 e29: 3 total, 2 up, 3 in
Dec 08 09:47:30 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 29 pg[6.1e( empty local-lis/les=28/29 n=0 ec=25/17 lis/c=25/25 les/c/f=27/27/0 sis=28) [0] r=0 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:30 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 29 pg[6.1c( empty local-lis/les=28/29 n=0 ec=25/17 lis/c=25/25 les/c/f=27/27/0 sis=28) [0] r=0 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:30 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 29 pg[5.1f( empty local-lis/les=28/29 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=28) [0] r=0 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:30 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 29 pg[5.10( empty local-lis/les=28/29 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=28) [0] r=0 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:30 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 29 pg[3.16( empty local-lis/les=28/29 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:30 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 29 pg[5.11( empty local-lis/les=28/29 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=28) [0] r=0 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:30 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 29 pg[3.14( empty local-lis/les=28/29 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:30 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 29 pg[6.17( empty local-lis/les=28/29 n=0 ec=25/17 lis/c=25/25 les/c/f=27/27/0 sis=28) [0] r=0 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:30 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 29 pg[3.15( empty local-lis/les=28/29 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:30 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 29 pg[5.15( empty local-lis/les=28/29 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=28) [0] r=0 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:30 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 29 pg[4.15( empty local-lis/les=28/29 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:30 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 29 pg[6.12( empty local-lis/les=28/29 n=0 ec=25/17 lis/c=25/25 les/c/f=27/27/0 sis=28) [0] r=0 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:30 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 29 pg[3.13( empty local-lis/les=28/29 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:30 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 29 pg[4.13( empty local-lis/les=28/29 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:30 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 29 pg[3.10( empty local-lis/les=28/29 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:30 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 29 pg[5.16( empty local-lis/les=28/29 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=28) [0] r=0 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:30 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 29 pg[3.e( empty local-lis/les=28/29 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:30 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 29 pg[3.11( empty local-lis/les=28/29 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:30 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 29 pg[4.9( empty local-lis/les=28/29 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:30 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 29 pg[3.f( empty local-lis/les=28/29 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:30 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 29 pg[5.9( empty local-lis/les=28/29 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=28) [0] r=0 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:30 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 29 pg[6.15( empty local-lis/les=28/29 n=0 ec=25/17 lis/c=25/25 les/c/f=27/27/0 sis=28) [0] r=0 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:30 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 29 pg[4.1f( empty local-lis/les=28/29 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:30 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 29 pg[4.8( empty local-lis/les=28/29 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:30 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 29 pg[3.c( empty local-lis/les=28/29 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:30 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 29 pg[6.a( empty local-lis/les=28/29 n=0 ec=25/17 lis/c=25/25 les/c/f=27/27/0 sis=28) [0] r=0 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:30 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 29 pg[6.8( empty local-lis/les=28/29 n=0 ec=25/17 lis/c=25/25 les/c/f=27/27/0 sis=28) [0] r=0 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:30 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 29 pg[4.a( empty local-lis/les=28/29 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:30 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 29 pg[3.d( empty local-lis/les=28/29 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:30 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 29 pg[3.a( empty local-lis/les=28/29 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:30 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 29 pg[6.7( empty local-lis/les=28/29 n=0 ec=25/17 lis/c=25/25 les/c/f=27/27/0 sis=28) [0] r=0 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:30 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 29 pg[5.4( empty local-lis/les=28/29 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=28) [0] r=0 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:30 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 29 pg[4.5( empty local-lis/les=28/29 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:30 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 29 pg[5.2( empty local-lis/les=28/29 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=28) [0] r=0 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:30 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 29 pg[5.7( empty local-lis/les=28/29 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=28) [0] r=0 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:30 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 29 pg[4.d( empty local-lis/les=28/29 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:30 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 29 pg[3.3( empty local-lis/les=28/29 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:30 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 29 pg[3.5( empty local-lis/les=28/29 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:30 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 29 pg[5.1( empty local-lis/les=28/29 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=28) [0] r=0 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:30 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 29 pg[6.2( empty local-lis/les=28/29 n=0 ec=25/17 lis/c=25/25 les/c/f=27/27/0 sis=28) [0] r=0 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:30 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 29 pg[4.1( empty local-lis/les=28/29 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:30 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 29 pg[6.5( empty local-lis/les=28/29 n=0 ec=25/17 lis/c=25/25 les/c/f=27/27/0 sis=28) [0] r=0 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:30 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 29 pg[5.f( empty local-lis/les=28/29 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=28) [0] r=0 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:30 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 29 pg[3.9( empty local-lis/les=28/29 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:30 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 29 pg[6.3( empty local-lis/les=28/29 n=0 ec=25/17 lis/c=25/25 les/c/f=27/27/0 sis=28) [0] r=0 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:30 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 29 pg[4.e( empty local-lis/les=28/29 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:30 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 29 pg[6.e( empty local-lis/les=28/29 n=0 ec=25/17 lis/c=25/25 les/c/f=27/27/0 sis=28) [0] r=0 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:30 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 29 pg[5.e( empty local-lis/les=28/29 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=28) [0] r=0 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:30 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 29 pg[3.1a( empty local-lis/les=28/29 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:30 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 29 pg[4.c( empty local-lis/les=28/29 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:30 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 29 pg[5.1c( empty local-lis/les=28/29 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=28) [0] r=0 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:30 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 29 pg[5.1a( empty local-lis/les=28/29 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=28) [0] r=0 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:30 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 29 pg[5.1b( empty local-lis/les=28/29 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=28) [0] r=0 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:30 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 29 pg[6.19( empty local-lis/les=28/29 n=0 ec=25/17 lis/c=25/25 les/c/f=27/27/0 sis=28) [0] r=0 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:30 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 29 pg[3.1d( empty local-lis/les=28/29 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:30 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 29 pg[3.1c( empty local-lis/les=28/29 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:30 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 29 pg[6.d( empty local-lis/les=28/29 n=0 ec=25/17 lis/c=25/25 les/c/f=27/27/0 sis=28) [0] r=0 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:30 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 29 pg[4.1b( empty local-lis/les=28/29 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:30 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 29 pg[6.1a( empty local-lis/les=28/29 n=0 ec=25/17 lis/c=25/25 les/c/f=27/27/0 sis=28) [0] r=0 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:30 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 29 pg[4.18( empty local-lis/les=28/29 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:30 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 29 pg[4.1a( empty local-lis/les=28/29 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=28) [0] r=0 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:30 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 29 pg[5.18( empty local-lis/les=28/29 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=28) [0] r=0 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:30 compute-1 podman[80333]: 2025-12-08 09:47:30.568769582 +0000 UTC m=+0.069311739 container exec 0b1ceffabe238cda53122741c28a5b5e679f0efe3fbc9966719b6787d3af3102 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-crash-compute-1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 08 09:47:30 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Dec 08 09:47:30 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Dec 08 09:47:30 compute-1 podman[80333]: 2025-12-08 09:47:30.691366272 +0000 UTC m=+0.191908349 container exec_died 0b1ceffabe238cda53122741c28a5b5e679f0efe3fbc9966719b6787d3af3102 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-crash-compute-1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec 08 09:47:31 compute-1 sudo[80235]: pam_unix(sudo:session): session closed for user root
Dec 08 09:47:31 compute-1 sudo[80416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 08 09:47:31 compute-1 sudo[80416]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:47:31 compute-1 sudo[80416]: pam_unix(sudo:session): session closed for user root
Dec 08 09:47:31 compute-1 ceph-mon[79846]: 2.1a scrub starts
Dec 08 09:47:31 compute-1 ceph-mon[79846]: 2.1a scrub ok
Dec 08 09:47:31 compute-1 ceph-mon[79846]: 3.1f scrub starts
Dec 08 09:47:31 compute-1 ceph-mon[79846]: 3.1f scrub ok
Dec 08 09:47:31 compute-1 ceph-mon[79846]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Dec 08 09:47:31 compute-1 ceph-mon[79846]: osdmap e29: 3 total, 2 up, 3 in
Dec 08 09:47:31 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 08 09:47:31 compute-1 ceph-mon[79846]: from='osd.2 [v2:192.168.122.102:6800/2213880029,v1:192.168.122.102:6801/2213880029]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Dec 08 09:47:31 compute-1 ceph-mon[79846]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Dec 08 09:47:31 compute-1 ceph-mon[79846]: pgmap v89: 193 pgs: 193 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 08 09:47:31 compute-1 ceph-mon[79846]: from='client.? 192.168.122.100:0/1181990922' entity='client.admin' 
Dec 08 09:47:31 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:31 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:31 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:31 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:31 compute-1 sudo[80441]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 08 09:47:31 compute-1 sudo[80441]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:47:31 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e30 e30: 3 total, 2 up, 3 in
Dec 08 09:47:31 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Dec 08 09:47:31 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Dec 08 09:47:31 compute-1 sudo[80441]: pam_unix(sudo:session): session closed for user root
Dec 08 09:47:32 compute-1 ceph-mon[79846]: 7.1f scrub starts
Dec 08 09:47:32 compute-1 ceph-mon[79846]: 7.1f scrub ok
Dec 08 09:47:32 compute-1 ceph-mon[79846]: 5.19 scrub starts
Dec 08 09:47:32 compute-1 ceph-mon[79846]: 5.19 scrub ok
Dec 08 09:47:32 compute-1 ceph-mon[79846]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]': finished
Dec 08 09:47:32 compute-1 ceph-mon[79846]: osdmap e30: 3 total, 2 up, 3 in
Dec 08 09:47:32 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 08 09:47:32 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 08 09:47:32 compute-1 ceph-mon[79846]: from='client.14286 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 08 09:47:32 compute-1 ceph-mon[79846]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Dec 08 09:47:32 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:32 compute-1 ceph-mon[79846]: Saving service ingress.rgw.default spec with placement count:2
Dec 08 09:47:32 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:32 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Dec 08 09:47:32 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Dec 08 09:47:33 compute-1 ceph-mon[79846]: purged_snaps scrub starts
Dec 08 09:47:33 compute-1 ceph-mon[79846]: purged_snaps scrub ok
Dec 08 09:47:33 compute-1 ceph-mon[79846]: 7.1c scrub starts
Dec 08 09:47:33 compute-1 ceph-mon[79846]: 7.1c scrub ok
Dec 08 09:47:33 compute-1 ceph-mon[79846]: 3.1e scrub starts
Dec 08 09:47:33 compute-1 ceph-mon[79846]: 3.1e scrub ok
Dec 08 09:47:33 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 08 09:47:33 compute-1 ceph-mon[79846]: pgmap v91: 193 pgs: 193 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 08 09:47:33 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:33 compute-1 sudo[80498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 08 09:47:33 compute-1 sudo[80498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:47:33 compute-1 sudo[80498]: pam_unix(sudo:session): session closed for user root
Dec 08 09:47:33 compute-1 sudo[80523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/etc/ceph
Dec 08 09:47:33 compute-1 sudo[80523]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:47:33 compute-1 sudo[80523]: pam_unix(sudo:session): session closed for user root
Dec 08 09:47:33 compute-1 sudo[80548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/etc/ceph/ceph.conf.new
Dec 08 09:47:33 compute-1 sudo[80548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:47:33 compute-1 sudo[80548]: pam_unix(sudo:session): session closed for user root
Dec 08 09:47:33 compute-1 sudo[80573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4
Dec 08 09:47:33 compute-1 sudo[80573]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:47:33 compute-1 sudo[80573]: pam_unix(sudo:session): session closed for user root
Dec 08 09:47:33 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Dec 08 09:47:33 compute-1 sudo[80598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/etc/ceph/ceph.conf.new
Dec 08 09:47:33 compute-1 sudo[80598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:47:33 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Dec 08 09:47:33 compute-1 sudo[80598]: pam_unix(sudo:session): session closed for user root
Dec 08 09:47:33 compute-1 sudo[80646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/etc/ceph/ceph.conf.new
Dec 08 09:47:33 compute-1 sudo[80646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:47:33 compute-1 sudo[80646]: pam_unix(sudo:session): session closed for user root
Dec 08 09:47:33 compute-1 sudo[80671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/etc/ceph/ceph.conf.new
Dec 08 09:47:33 compute-1 sudo[80671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:47:33 compute-1 sudo[80671]: pam_unix(sudo:session): session closed for user root
Dec 08 09:47:33 compute-1 sudo[80696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 08 09:47:33 compute-1 sudo[80696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:47:33 compute-1 sudo[80696]: pam_unix(sudo:session): session closed for user root
Dec 08 09:47:33 compute-1 sudo[80721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config
Dec 08 09:47:33 compute-1 sudo[80721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:47:34 compute-1 sudo[80721]: pam_unix(sudo:session): session closed for user root
Dec 08 09:47:34 compute-1 sudo[80746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config
Dec 08 09:47:34 compute-1 sudo[80746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:47:34 compute-1 sudo[80746]: pam_unix(sudo:session): session closed for user root
Dec 08 09:47:34 compute-1 sudo[80771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf.new
Dec 08 09:47:34 compute-1 sudo[80771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:47:34 compute-1 sudo[80771]: pam_unix(sudo:session): session closed for user root
Dec 08 09:47:34 compute-1 sudo[80796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4
Dec 08 09:47:34 compute-1 sudo[80796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:47:34 compute-1 sudo[80796]: pam_unix(sudo:session): session closed for user root
Dec 08 09:47:34 compute-1 ceph-mon[79846]: 2.18 scrub starts
Dec 08 09:47:34 compute-1 ceph-mon[79846]: 2.18 scrub ok
Dec 08 09:47:34 compute-1 ceph-mon[79846]: 6.18 scrub starts
Dec 08 09:47:34 compute-1 ceph-mon[79846]: 6.18 scrub ok
Dec 08 09:47:34 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:34 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Dec 08 09:47:34 compute-1 ceph-mon[79846]: Adjusting osd_memory_target on compute-2 to 127.9M
Dec 08 09:47:34 compute-1 ceph-mon[79846]: Unable to set osd_memory_target on compute-2 to 134211993: error parsing value: Value '134211993' is below minimum 939524096
Dec 08 09:47:34 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 08 09:47:34 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 08 09:47:34 compute-1 ceph-mon[79846]: Updating compute-0:/etc/ceph/ceph.conf
Dec 08 09:47:34 compute-1 ceph-mon[79846]: Updating compute-1:/etc/ceph/ceph.conf
Dec 08 09:47:34 compute-1 ceph-mon[79846]: Updating compute-2:/etc/ceph/ceph.conf
Dec 08 09:47:34 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 08 09:47:34 compute-1 ceph-mon[79846]: 7.12 scrub starts
Dec 08 09:47:34 compute-1 ceph-mon[79846]: from='client.14292 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 08 09:47:34 compute-1 ceph-mon[79846]: 7.12 scrub ok
Dec 08 09:47:34 compute-1 ceph-mon[79846]: Saving service node-exporter spec with placement *
Dec 08 09:47:34 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:34 compute-1 ceph-mon[79846]: Saving service grafana spec with placement compute-0;count:1
Dec 08 09:47:34 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:34 compute-1 ceph-mon[79846]: Saving service prometheus spec with placement compute-0;count:1
Dec 08 09:47:34 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:34 compute-1 ceph-mon[79846]: Saving service alertmanager spec with placement compute-0;count:1
Dec 08 09:47:34 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:34 compute-1 ceph-mon[79846]: Updating compute-2:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf
Dec 08 09:47:34 compute-1 ceph-mon[79846]: Updating compute-1:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf
Dec 08 09:47:34 compute-1 ceph-mon[79846]: Updating compute-0:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf
Dec 08 09:47:34 compute-1 sudo[80821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf.new
Dec 08 09:47:34 compute-1 sudo[80821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:47:34 compute-1 sudo[80821]: pam_unix(sudo:session): session closed for user root
Dec 08 09:47:34 compute-1 sudo[80869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf.new
Dec 08 09:47:34 compute-1 sudo[80869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:47:34 compute-1 sudo[80869]: pam_unix(sudo:session): session closed for user root
Dec 08 09:47:34 compute-1 sudo[80894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf.new
Dec 08 09:47:34 compute-1 sudo[80894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:47:34 compute-1 sudo[80894]: pam_unix(sudo:session): session closed for user root
Dec 08 09:47:34 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e30 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 08 09:47:34 compute-1 sudo[80919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf.new /var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf
Dec 08 09:47:34 compute-1 sudo[80919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:47:34 compute-1 sudo[80919]: pam_unix(sudo:session): session closed for user root
Dec 08 09:47:34 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Dec 08 09:47:34 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Dec 08 09:47:35 compute-1 ceph-mon[79846]: 5.1d scrub starts
Dec 08 09:47:35 compute-1 ceph-mon[79846]: 5.1d scrub ok
Dec 08 09:47:35 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 08 09:47:35 compute-1 ceph-mon[79846]: pgmap v92: 193 pgs: 193 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 08 09:47:35 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:35 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:35 compute-1 ceph-mon[79846]: 2.16 scrub starts
Dec 08 09:47:35 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:35 compute-1 ceph-mon[79846]: 2.16 scrub ok
Dec 08 09:47:35 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:35 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:35 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:35 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:35 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 08 09:47:35 compute-1 ceph-mon[79846]: from='client.? 192.168.122.100:0/2361686037' entity='client.admin' 
Dec 08 09:47:35 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 08 09:47:35 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 08 09:47:35 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 08 09:47:35 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Dec 08 09:47:35 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Dec 08 09:47:35 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e31 e31: 3 total, 3 up, 3 in
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 30 pg[6.1e( empty local-lis/les=28/29 n=0 ec=25/17 lis/c=28/28 les/c/f=29/29/0 sis=30 pruub=10.282361031s) [] r=-1 lpr=30 pi=[28,30)/1 crt=0'0 mlcod 0'0 active pruub 75.039154053s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 31 pg[6.1e( empty local-lis/les=28/29 n=0 ec=25/17 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=10.282306671s) [2] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.039154053s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 30 pg[4.1f( empty local-lis/les=28/29 n=0 ec=23/15 lis/c=28/28 les/c/f=29/29/0 sis=30 pruub=10.287930489s) [] r=-1 lpr=30 pi=[28,30)/1 crt=0'0 mlcod 0'0 active pruub 75.044769287s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 30 pg[7.1f( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=30 pruub=13.652628899s) [] r=-1 lpr=30 pi=[26,30)/1 crt=0'0 mlcod 0'0 active pruub 78.409477234s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 31 pg[6.1e( empty local-lis/les=28/29 n=0 ec=25/17 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=10.282294273s) [2] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.039154053s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 31 pg[7.1f( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=31 pruub=13.652565002s) [2] r=-1 lpr=31 pi=[26,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.409477234s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 31 pg[4.1f( empty local-lis/les=28/29 n=0 ec=23/15 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=10.287848473s) [2] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.044769287s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 31 pg[4.1f( empty local-lis/les=28/29 n=0 ec=23/15 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=10.287821770s) [2] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.044769287s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 30 pg[6.1c( empty local-lis/les=28/29 n=0 ec=25/17 lis/c=28/28 les/c/f=29/29/0 sis=30 pruub=10.287166595s) [] r=-1 lpr=30 pi=[28,30)/1 crt=0'0 mlcod 0'0 active pruub 75.044189453s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 30 pg[2.18( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=30 pruub=8.276780128s) [] r=-1 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 active pruub 73.033843994s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 31 pg[2.18( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=31 pruub=8.276759148s) [2] r=-1 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.033843994s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 31 pg[6.1c( empty local-lis/les=28/29 n=0 ec=25/17 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=10.287102699s) [2] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.044189453s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 31 pg[2.18( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=31 pruub=8.276750565s) [2] r=-1 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.033843994s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 31 pg[6.1c( empty local-lis/les=28/29 n=0 ec=25/17 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=10.287079811s) [2] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.044189453s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 31 pg[7.1f( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=31 pruub=13.652547836s) [2] r=-1 lpr=31 pi=[26,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.409477234s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 30 pg[6.12( empty local-lis/les=28/29 n=0 ec=25/17 lis/c=28/28 les/c/f=29/29/0 sis=30 pruub=10.287024498s) [] r=-1 lpr=30 pi=[28,30)/1 crt=0'0 mlcod 0'0 active pruub 75.044433594s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 31 pg[6.12( empty local-lis/les=28/29 n=0 ec=25/17 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=10.286977768s) [2] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.044433594s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 31 pg[6.12( empty local-lis/les=28/29 n=0 ec=25/17 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=10.286966324s) [2] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.044433594s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 30 pg[3.15( empty local-lis/les=28/29 n=0 ec=23/13 lis/c=28/28 les/c/f=29/29/0 sis=30 pruub=10.286864281s) [] r=-1 lpr=30 pi=[28,30)/1 crt=0'0 mlcod 0'0 active pruub 75.044380188s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 31 pg[3.15( empty local-lis/les=28/29 n=0 ec=23/13 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=10.286818504s) [2] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.044380188s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 31 pg[3.15( empty local-lis/les=28/29 n=0 ec=23/13 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=10.286779404s) [2] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.044380188s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 30 pg[7.11( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=30 pruub=13.654935837s) [] r=-1 lpr=30 pi=[26,30)/1 crt=0'0 mlcod 0'0 active pruub 78.412902832s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 31 pg[7.11( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=31 pruub=13.654901505s) [2] r=-1 lpr=31 pi=[26,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.412902832s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 31 pg[7.11( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=31 pruub=13.654889107s) [2] r=-1 lpr=31 pi=[26,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.412902832s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 30 pg[4.15( empty local-lis/les=28/29 n=0 ec=23/15 lis/c=28/28 les/c/f=29/29/0 sis=30 pruub=10.286271095s) [] r=-1 lpr=30 pi=[28,30)/1 crt=0'0 mlcod 0'0 active pruub 75.044410706s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 31 pg[4.15( empty local-lis/les=28/29 n=0 ec=23/15 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=10.286239624s) [2] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.044410706s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 31 pg[4.15( empty local-lis/les=28/29 n=0 ec=23/15 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=10.286231995s) [2] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.044410706s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 30 pg[6.17( empty local-lis/les=28/29 n=0 ec=25/17 lis/c=28/28 les/c/f=29/29/0 sis=30 pruub=10.286154747s) [] r=-1 lpr=30 pi=[28,30)/1 crt=0'0 mlcod 0'0 active pruub 75.044372559s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 31 pg[6.17( empty local-lis/les=28/29 n=0 ec=25/17 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=10.286127090s) [2] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.044372559s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 31 pg[6.17( empty local-lis/les=28/29 n=0 ec=25/17 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=10.286117554s) [2] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.044372559s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 30 pg[7.16( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=30 pruub=13.654583931s) [] r=-1 lpr=30 pi=[26,30)/1 crt=0'0 mlcod 0'0 active pruub 78.412841797s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 31 pg[7.16( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=31 pruub=13.654491425s) [2] r=-1 lpr=31 pi=[26,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.412841797s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 31 pg[7.16( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=31 pruub=13.654465675s) [2] r=-1 lpr=31 pi=[26,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.412841797s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 30 pg[3.e( empty local-lis/les=28/29 n=0 ec=23/13 lis/c=28/28 les/c/f=29/29/0 sis=30 pruub=10.286209106s) [] r=-1 lpr=30 pi=[28,30)/1 crt=0'0 mlcod 0'0 active pruub 75.044647217s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 31 pg[3.e( empty local-lis/les=28/29 n=0 ec=23/13 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=10.286187172s) [2] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.044647217s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 30 pg[3.11( empty local-lis/les=28/29 n=0 ec=23/13 lis/c=28/28 les/c/f=29/29/0 sis=30 pruub=10.286131859s) [] r=-1 lpr=30 pi=[28,30)/1 crt=0'0 mlcod 0'0 active pruub 75.044609070s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 31 pg[3.e( empty local-lis/les=28/29 n=0 ec=23/13 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=10.286180496s) [2] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.044647217s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 31 pg[3.11( empty local-lis/les=28/29 n=0 ec=23/13 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=10.286087036s) [2] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.044609070s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 30 pg[4.9( empty local-lis/les=28/29 n=0 ec=23/15 lis/c=28/28 les/c/f=29/29/0 sis=30 pruub=10.286180496s) [] r=-1 lpr=30 pi=[28,30)/1 crt=0'0 mlcod 0'0 active pruub 75.044685364s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 31 pg[4.9( empty local-lis/les=28/29 n=0 ec=23/15 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=10.286096573s) [2] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.044685364s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 31 pg[4.9( empty local-lis/les=28/29 n=0 ec=23/15 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=10.286087036s) [2] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.044685364s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 31 pg[3.11( empty local-lis/les=28/29 n=0 ec=23/13 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=10.286031723s) [2] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.044609070s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 30 pg[2.f( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=30 pruub=8.274736404s) [] r=-1 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 active pruub 73.033462524s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 31 pg[2.f( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=31 pruub=8.274707794s) [2] r=-1 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.033462524s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 30 pg[4.8( empty local-lis/les=28/29 n=0 ec=23/15 lis/c=28/28 les/c/f=29/29/0 sis=30 pruub=10.286034584s) [] r=-1 lpr=30 pi=[28,30)/1 crt=0'0 mlcod 0'0 active pruub 75.044807434s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 31 pg[2.f( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=31 pruub=8.274699211s) [2] r=-1 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.033462524s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 31 pg[4.8( empty local-lis/les=28/29 n=0 ec=23/15 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=10.286015511s) [2] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.044807434s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 31 pg[4.8( empty local-lis/les=28/29 n=0 ec=23/15 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=10.286007881s) [2] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.044807434s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 30 pg[2.12( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=30 pruub=8.275147438s) [] r=-1 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 active pruub 73.034126282s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 31 pg[2.12( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=31 pruub=8.275110245s) [2] r=-1 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.034126282s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 31 pg[2.12( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=31 pruub=8.275096893s) [2] r=-1 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.034126282s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 30 pg[5.4( empty local-lis/les=28/29 n=0 ec=25/16 lis/c=28/28 les/c/f=29/29/0 sis=30 pruub=10.285795212s) [] r=-1 lpr=30 pi=[28,30)/1 crt=0'0 mlcod 0'0 active pruub 75.044914246s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 30 pg[2.b( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=30 pruub=8.274186134s) [] r=-1 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 active pruub 73.033294678s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 30 pg[7.5( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=30 pruub=13.653789520s) [] r=-1 lpr=30 pi=[26,30)/1 crt=0'0 mlcod 0'0 active pruub 78.412971497s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 31 pg[2.b( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=31 pruub=8.274079323s) [2] r=-1 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.033294678s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 31 pg[7.5( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=31 pruub=13.653735161s) [2] r=-1 lpr=31 pi=[26,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.412971497s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 31 pg[2.b( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=31 pruub=8.274065971s) [2] r=-1 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.033294678s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 31 pg[7.5( empty local-lis/les=26/27 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=31 pruub=13.653723717s) [2] r=-1 lpr=31 pi=[26,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.412971497s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 30 pg[2.5( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=30 pruub=8.273636818s) [] r=-1 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 active pruub 73.032951355s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 31 pg[5.4( empty local-lis/les=28/29 n=0 ec=25/16 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=10.285586357s) [2] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.044914246s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 31 pg[2.5( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=31 pruub=8.273596764s) [2] r=-1 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.032951355s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 31 pg[5.4( empty local-lis/les=28/29 n=0 ec=25/16 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=10.285561562s) [2] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.044914246s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 31 pg[2.5( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=31 pruub=8.273582458s) [2] r=-1 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.032951355s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 30 pg[4.1( empty local-lis/les=28/29 n=0 ec=23/15 lis/c=28/28 les/c/f=29/29/0 sis=30 pruub=10.285372734s) [] r=-1 lpr=30 pi=[28,30)/1 crt=0'0 mlcod 0'0 active pruub 75.045036316s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 30 pg[3.9( empty local-lis/les=28/29 n=0 ec=23/13 lis/c=28/28 les/c/f=29/29/0 sis=30 pruub=10.285308838s) [] r=-1 lpr=30 pi=[28,30)/1 crt=0'0 mlcod 0'0 active pruub 75.045066833s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 31 pg[3.9( empty local-lis/les=28/29 n=0 ec=23/13 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=10.285275459s) [2] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.045066833s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 31 pg[3.9( empty local-lis/les=28/29 n=0 ec=23/13 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=10.285265923s) [2] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.045066833s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 31 pg[4.1( empty local-lis/les=28/29 n=0 ec=23/15 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=10.285218239s) [2] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.045036316s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 31 pg[4.1( empty local-lis/les=28/29 n=0 ec=23/15 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=10.285202026s) [2] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.045036316s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 30 pg[5.e( empty local-lis/les=28/29 n=0 ec=25/16 lis/c=28/28 les/c/f=29/29/0 sis=30 pruub=10.285250664s) [] r=-1 lpr=30 pi=[28,30)/1 crt=0'0 mlcod 0'0 active pruub 75.045127869s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 31 pg[5.e( empty local-lis/les=28/29 n=0 ec=25/16 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=10.285204887s) [2] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.045127869s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 30 pg[3.1a( empty local-lis/les=28/29 n=0 ec=23/13 lis/c=28/28 les/c/f=29/29/0 sis=30 pruub=10.285158157s) [] r=-1 lpr=30 pi=[28,30)/1 crt=0'0 mlcod 0'0 active pruub 75.045150757s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 30 pg[3.1d( empty local-lis/les=28/29 n=0 ec=23/13 lis/c=28/28 les/c/f=29/29/0 sis=30 pruub=10.285220146s) [] r=-1 lpr=30 pi=[28,30)/1 crt=0'0 mlcod 0'0 active pruub 75.045257568s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 31 pg[5.e( empty local-lis/les=28/29 n=0 ec=25/16 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=10.285145760s) [2] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.045127869s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 30 pg[2.1c( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=30 pruub=8.272837639s) [] r=-1 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 active pruub 73.032897949s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 31 pg[3.1d( empty local-lis/les=28/29 n=0 ec=23/13 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=10.285184860s) [2] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.045257568s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 30 pg[5.1a( empty local-lis/les=28/29 n=0 ec=25/16 lis/c=28/28 les/c/f=29/29/0 sis=30 pruub=10.285152435s) [] r=-1 lpr=30 pi=[28,30)/1 crt=0'0 mlcod 0'0 active pruub 75.045242310s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 31 pg[3.1a( empty local-lis/les=28/29 n=0 ec=23/13 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=10.285058975s) [2] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.045150757s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 31 pg[2.1c( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=31 pruub=8.272806168s) [2] r=-1 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.032897949s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 31 pg[5.1a( empty local-lis/les=28/29 n=0 ec=25/16 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=10.285129547s) [2] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.045242310s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 31 pg[3.1d( empty local-lis/les=28/29 n=0 ec=23/13 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=10.285171509s) [2] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.045257568s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 31 pg[2.1c( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=31 pruub=8.272791862s) [2] r=-1 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.032897949s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 31 pg[3.1a( empty local-lis/les=28/29 n=0 ec=23/13 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=10.285045624s) [2] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.045150757s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 31 pg[5.1a( empty local-lis/les=28/29 n=0 ec=25/16 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=10.285120964s) [2] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.045242310s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 30 pg[2.1d( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=30 pruub=8.272611618s) [] r=-1 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 active pruub 73.032890320s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 31 pg[2.1d( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=31 pruub=8.272589684s) [2] r=-1 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.032890320s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:47:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 31 pg[2.1d( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=31 pruub=8.272582054s) [2] r=-1 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.032890320s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:47:36 compute-1 ceph-mon[79846]: 6.1f scrub starts
Dec 08 09:47:36 compute-1 ceph-mon[79846]: 6.1f scrub ok
Dec 08 09:47:36 compute-1 ceph-mon[79846]: OSD bench result of 5992.083020 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 08 09:47:36 compute-1 ceph-mon[79846]: 2.17 scrub starts
Dec 08 09:47:36 compute-1 ceph-mon[79846]: 2.17 scrub ok
Dec 08 09:47:36 compute-1 ceph-mon[79846]: from='client.? 192.168.122.100:0/2766465450' entity='client.admin' 
Dec 08 09:47:36 compute-1 ceph-mon[79846]: osd.2 [v2:192.168.122.102:6800/2213880029,v1:192.168.122.102:6801/2213880029] boot
Dec 08 09:47:36 compute-1 ceph-mon[79846]: osdmap e31: 3 total, 3 up, 3 in
Dec 08 09:47:36 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 08 09:47:36 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Dec 08 09:47:36 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Dec 08 09:47:36 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e32 e32: 3 total, 3 up, 3 in
Dec 08 09:47:37 compute-1 ceph-mon[79846]: 6.c scrub starts
Dec 08 09:47:37 compute-1 ceph-mon[79846]: 6.c scrub ok
Dec 08 09:47:37 compute-1 ceph-mon[79846]: pgmap v94: 193 pgs: 57 peering, 136 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec 08 09:47:37 compute-1 ceph-mon[79846]: 2.14 scrub starts
Dec 08 09:47:37 compute-1 ceph-mon[79846]: 2.14 scrub ok
Dec 08 09:47:37 compute-1 ceph-mon[79846]: from='client.? 192.168.122.100:0/1226724288' entity='client.admin' 
Dec 08 09:47:37 compute-1 ceph-mon[79846]: osdmap e32: 3 total, 3 up, 3 in
Dec 08 09:47:37 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 7.17 deep-scrub starts
Dec 08 09:47:37 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 7.17 deep-scrub ok
Dec 08 09:47:37 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e33 e33: 3 total, 3 up, 3 in
Dec 08 09:47:38 compute-1 sshd-session[80944]: Received disconnect from 95.128.196.223 port 40794:11: Bye Bye [preauth]
Dec 08 09:47:38 compute-1 sshd-session[80944]: Disconnected from authenticating user root 95.128.196.223 port 40794 [preauth]
Dec 08 09:47:38 compute-1 ceph-mon[79846]: 4.f scrub starts
Dec 08 09:47:38 compute-1 ceph-mon[79846]: 4.f scrub ok
Dec 08 09:47:38 compute-1 ceph-mon[79846]: 3.4 scrub starts
Dec 08 09:47:38 compute-1 ceph-mon[79846]: 3.4 scrub ok
Dec 08 09:47:38 compute-1 ceph-mon[79846]: 7.17 deep-scrub starts
Dec 08 09:47:38 compute-1 ceph-mon[79846]: 7.17 deep-scrub ok
Dec 08 09:47:38 compute-1 ceph-mon[79846]: osdmap e33: 3 total, 3 up, 3 in
Dec 08 09:47:38 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:38 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Dec 08 09:47:38 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Dec 08 09:47:38 compute-1 sudo[80969]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jccxbcdthhblljjxwdoxlwonlyjzxmsh ; /usr/bin/python3'
Dec 08 09:47:38 compute-1 sudo[80969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:47:39 compute-1 python3[80971]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a -f 'name=ceph-?(.*)-mgr.*' --format \{\{\.Command\}\} --no-trunc _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 08 09:47:39 compute-1 sudo[80969]: pam_unix(sudo:session): session closed for user root
Dec 08 09:47:39 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 08 09:47:39 compute-1 ceph-mon[79846]: 7.11 scrub starts
Dec 08 09:47:39 compute-1 ceph-mon[79846]: 7.11 scrub ok
Dec 08 09:47:39 compute-1 ceph-mon[79846]: pgmap v97: 193 pgs: 57 peering, 136 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec 08 09:47:39 compute-1 ceph-mon[79846]: 4.4 scrub starts
Dec 08 09:47:39 compute-1 ceph-mon[79846]: 4.4 scrub ok
Dec 08 09:47:39 compute-1 ceph-mon[79846]: from='client.? 192.168.122.100:0/2940995855' entity='client.admin' 
Dec 08 09:47:39 compute-1 ceph-mon[79846]: 2.11 scrub starts
Dec 08 09:47:39 compute-1 ceph-mon[79846]: 2.11 scrub ok
Dec 08 09:47:39 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:39 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Dec 08 09:47:39 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Dec 08 09:47:40 compute-1 ceph-mon[79846]: 6.1c scrub starts
Dec 08 09:47:40 compute-1 ceph-mon[79846]: 6.1c scrub ok
Dec 08 09:47:40 compute-1 ceph-mon[79846]: 5.5 scrub starts
Dec 08 09:47:40 compute-1 ceph-mon[79846]: 5.5 scrub ok
Dec 08 09:47:40 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:40 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.dimexm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 08 09:47:40 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.dimexm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 08 09:47:40 compute-1 ceph-mon[79846]: 7.15 scrub starts
Dec 08 09:47:40 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:40 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 08 09:47:40 compute-1 ceph-mon[79846]: Deploying daemon rgw.rgw.compute-2.dimexm on compute-2
Dec 08 09:47:40 compute-1 ceph-mon[79846]: 7.15 scrub ok
Dec 08 09:47:40 compute-1 ceph-mon[79846]: from='client.? 192.168.122.100:0/1238176050' entity='client.admin' 
Dec 08 09:47:40 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Dec 08 09:47:40 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Dec 08 09:47:41 compute-1 sudo[80985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 08 09:47:41 compute-1 sudo[80985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:47:41 compute-1 sudo[80985]: pam_unix(sudo:session): session closed for user root
Dec 08 09:47:41 compute-1 ceph-mon[79846]: 3.e scrub starts
Dec 08 09:47:41 compute-1 ceph-mon[79846]: 3.e scrub ok
Dec 08 09:47:41 compute-1 ceph-mon[79846]: pgmap v98: 193 pgs: 57 peering, 136 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec 08 09:47:41 compute-1 ceph-mon[79846]: 6.6 scrub starts
Dec 08 09:47:41 compute-1 ceph-mon[79846]: 6.6 scrub ok
Dec 08 09:47:41 compute-1 ceph-mon[79846]: 2.3 scrub starts
Dec 08 09:47:41 compute-1 ceph-mon[79846]: 2.3 scrub ok
Dec 08 09:47:41 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:41 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:41 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:41 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.rblbpq", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 08 09:47:41 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.rblbpq", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 08 09:47:41 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:41 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 08 09:47:41 compute-1 ceph-mon[79846]: from='client.? 192.168.122.100:0/3406242588' entity='client.admin' 
Dec 08 09:47:41 compute-1 sudo[81010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4
Dec 08 09:47:41 compute-1 sudo[81010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:47:41 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 2.0 scrub starts
Dec 08 09:47:41 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 2.0 scrub ok
Dec 08 09:47:42 compute-1 podman[81075]: 2025-12-08 09:47:42.102479771 +0000 UTC m=+0.055439533 container create 80a6eb89c528a27c1af1383b4ce5da22afafca7480d201367cc0abcb5478cc55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 08 09:47:42 compute-1 systemd[1]: Started libpod-conmon-80a6eb89c528a27c1af1383b4ce5da22afafca7480d201367cc0abcb5478cc55.scope.
Dec 08 09:47:42 compute-1 podman[81075]: 2025-12-08 09:47:42.067552378 +0000 UTC m=+0.020512120 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 08 09:47:42 compute-1 systemd[1]: Started libcrun container.
Dec 08 09:47:42 compute-1 podman[81075]: 2025-12-08 09:47:42.226038759 +0000 UTC m=+0.178998581 container init 80a6eb89c528a27c1af1383b4ce5da22afafca7480d201367cc0abcb5478cc55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_engelbart, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 08 09:47:42 compute-1 podman[81075]: 2025-12-08 09:47:42.238661592 +0000 UTC m=+0.191621344 container start 80a6eb89c528a27c1af1383b4ce5da22afafca7480d201367cc0abcb5478cc55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_engelbart, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 08 09:47:42 compute-1 podman[81075]: 2025-12-08 09:47:42.244817849 +0000 UTC m=+0.197777661 container attach 80a6eb89c528a27c1af1383b4ce5da22afafca7480d201367cc0abcb5478cc55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_engelbart, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec 08 09:47:42 compute-1 vigilant_engelbart[81091]: 167 167
Dec 08 09:47:42 compute-1 systemd[1]: libpod-80a6eb89c528a27c1af1383b4ce5da22afafca7480d201367cc0abcb5478cc55.scope: Deactivated successfully.
Dec 08 09:47:42 compute-1 podman[81075]: 2025-12-08 09:47:42.249652738 +0000 UTC m=+0.202612520 container died 80a6eb89c528a27c1af1383b4ce5da22afafca7480d201367cc0abcb5478cc55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_engelbart, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Dec 08 09:47:42 compute-1 systemd[1]: var-lib-containers-storage-overlay-fdb422f00ed007358f34a898d3e5d34cc57434030965f305a3255be6cf7e2bb9-merged.mount: Deactivated successfully.
Dec 08 09:47:42 compute-1 podman[81075]: 2025-12-08 09:47:42.302522606 +0000 UTC m=+0.255482338 container remove 80a6eb89c528a27c1af1383b4ce5da22afafca7480d201367cc0abcb5478cc55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_engelbart, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 08 09:47:42 compute-1 systemd[1]: libpod-conmon-80a6eb89c528a27c1af1383b4ce5da22afafca7480d201367cc0abcb5478cc55.scope: Deactivated successfully.
Dec 08 09:47:42 compute-1 systemd[1]: Reloading.
Dec 08 09:47:42 compute-1 systemd-rc-local-generator[81133]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 08 09:47:42 compute-1 systemd-sysv-generator[81137]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 08 09:47:42 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e34 e34: 3 total, 3 up, 3 in
Dec 08 09:47:42 compute-1 ceph-mon[79846]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0)
Dec 08 09:47:42 compute-1 ceph-mon[79846]: log_channel(audit) log [INF] : from='client.? 192.168.122.102:0/1719203410' entity='client.rgw.rgw.compute-2.dimexm' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Dec 08 09:47:42 compute-1 ceph-mon[79846]: 6.17 deep-scrub starts
Dec 08 09:47:42 compute-1 ceph-mon[79846]: 6.17 deep-scrub ok
Dec 08 09:47:42 compute-1 ceph-mon[79846]: Deploying daemon rgw.rgw.compute-1.rblbpq on compute-1
Dec 08 09:47:42 compute-1 ceph-mon[79846]: 3.2 scrub starts
Dec 08 09:47:42 compute-1 ceph-mon[79846]: 3.2 scrub ok
Dec 08 09:47:42 compute-1 ceph-mon[79846]: 2.0 scrub starts
Dec 08 09:47:42 compute-1 ceph-mon[79846]: 2.0 scrub ok
Dec 08 09:47:42 compute-1 ceph-mon[79846]: osdmap e34: 3 total, 3 up, 3 in
Dec 08 09:47:42 compute-1 ceph-mon[79846]: from='client.? 192.168.122.102:0/1719203410' entity='client.rgw.rgw.compute-2.dimexm' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Dec 08 09:47:42 compute-1 ceph-mon[79846]: from='client.? ' entity='client.rgw.rgw.compute-2.dimexm' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Dec 08 09:47:42 compute-1 ceph-mon[79846]: from='client.? 192.168.122.100:0/698056903' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Dec 08 09:47:42 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 7.0 scrub starts
Dec 08 09:47:42 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 7.0 scrub ok
Dec 08 09:47:42 compute-1 systemd[1]: Reloading.
Dec 08 09:47:42 compute-1 systemd-sysv-generator[81173]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 08 09:47:42 compute-1 systemd-rc-local-generator[81169]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 08 09:47:42 compute-1 systemd[1]: Starting Ceph rgw.rgw.compute-1.rblbpq for ceb838ef-9d5d-54e4-bddb-2f01adce2ad4...
Dec 08 09:47:43 compute-1 podman[81230]: 2025-12-08 09:47:43.347397775 +0000 UTC m=+0.060509468 container create 2666e5efb041811dc712985806c5277dfe26616ea7f8aea5e30ae84cfc7d3b64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-rgw-rgw-compute-1-rblbpq, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 08 09:47:43 compute-1 podman[81230]: 2025-12-08 09:47:43.319442703 +0000 UTC m=+0.032554456 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 08 09:47:43 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c7f9cf53cba2726fb48580939f2fda499ada99e85baaef27f8f6470c421e7a0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 08 09:47:43 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c7f9cf53cba2726fb48580939f2fda499ada99e85baaef27f8f6470c421e7a0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 08 09:47:43 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c7f9cf53cba2726fb48580939f2fda499ada99e85baaef27f8f6470c421e7a0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 08 09:47:43 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c7f9cf53cba2726fb48580939f2fda499ada99e85baaef27f8f6470c421e7a0/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-1.rblbpq supports timestamps until 2038 (0x7fffffff)
Dec 08 09:47:43 compute-1 podman[81230]: 2025-12-08 09:47:43.441757246 +0000 UTC m=+0.154868989 container init 2666e5efb041811dc712985806c5277dfe26616ea7f8aea5e30ae84cfc7d3b64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-rgw-rgw-compute-1-rblbpq, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec 08 09:47:43 compute-1 podman[81230]: 2025-12-08 09:47:43.452239527 +0000 UTC m=+0.165351220 container start 2666e5efb041811dc712985806c5277dfe26616ea7f8aea5e30ae84cfc7d3b64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-rgw-rgw-compute-1-rblbpq, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 08 09:47:43 compute-1 bash[81230]: 2666e5efb041811dc712985806c5277dfe26616ea7f8aea5e30ae84cfc7d3b64
Dec 08 09:47:43 compute-1 systemd[1]: Started Ceph rgw.rgw.compute-1.rblbpq for ceb838ef-9d5d-54e4-bddb-2f01adce2ad4.
Dec 08 09:47:43 compute-1 radosgw[81249]: deferred set uid:gid to 167:167 (ceph:ceph)
Dec 08 09:47:43 compute-1 radosgw[81249]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process radosgw, pid 2
Dec 08 09:47:43 compute-1 radosgw[81249]: framework: beast
Dec 08 09:47:43 compute-1 radosgw[81249]: framework conf key: endpoint, val: 192.168.122.101:8082
Dec 08 09:47:43 compute-1 radosgw[81249]: init_numa not setting numa affinity
Dec 08 09:47:43 compute-1 sudo[81010]: pam_unix(sudo:session): session closed for user root
Dec 08 09:47:43 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e35 e35: 3 total, 3 up, 3 in
Dec 08 09:47:43 compute-1 ceph-mon[79846]: 7.16 scrub starts
Dec 08 09:47:43 compute-1 ceph-mon[79846]: 7.16 scrub ok
Dec 08 09:47:43 compute-1 ceph-mon[79846]: pgmap v99: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 08 09:47:43 compute-1 ceph-mon[79846]: 3.1 scrub starts
Dec 08 09:47:43 compute-1 ceph-mon[79846]: 3.1 scrub ok
Dec 08 09:47:43 compute-1 ceph-mon[79846]: 7.0 scrub starts
Dec 08 09:47:43 compute-1 ceph-mon[79846]: 7.0 scrub ok
Dec 08 09:47:43 compute-1 ceph-mon[79846]: from='client.? ' entity='client.rgw.rgw.compute-2.dimexm' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Dec 08 09:47:43 compute-1 ceph-mon[79846]: osdmap e35: 3 total, 3 up, 3 in
Dec 08 09:47:43 compute-1 ceph-mon[79846]: from='client.? 192.168.122.100:0/698056903' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Dec 08 09:47:43 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:43 compute-1 ceph-mon[79846]: mgrmap e12: compute-0.kitiwu(active, since 2m), standbys: compute-2.zqytsv, compute-1.mmkaif
Dec 08 09:47:43 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:43 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:43 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.slkrtm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 08 09:47:43 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.slkrtm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 08 09:47:43 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:43 compute-1 ceph-mon[79846]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 08 09:47:43 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Dec 08 09:47:43 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Dec 08 09:47:44 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e36 e36: 3 total, 3 up, 3 in
Dec 08 09:47:44 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 08 09:47:44 compute-1 ceph-mon[79846]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Dec 08 09:47:44 compute-1 ceph-mon[79846]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/3268586272' entity='client.rgw.rgw.compute-1.rblbpq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec 08 09:47:44 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Dec 08 09:47:44 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Dec 08 09:47:44 compute-1 ceph-mon[79846]: 2.12 scrub starts
Dec 08 09:47:44 compute-1 ceph-mon[79846]: 2.12 scrub ok
Dec 08 09:47:44 compute-1 ceph-mon[79846]: 6.4 deep-scrub starts
Dec 08 09:47:44 compute-1 ceph-mon[79846]: 6.4 deep-scrub ok
Dec 08 09:47:44 compute-1 ceph-mon[79846]: Deploying daemon rgw.rgw.compute-0.slkrtm on compute-0
Dec 08 09:47:44 compute-1 ceph-mon[79846]: 2.2 scrub starts
Dec 08 09:47:44 compute-1 ceph-mon[79846]: 2.2 scrub ok
Dec 08 09:47:44 compute-1 ceph-mon[79846]: from='client.? 192.168.122.100:0/1032861131' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Dec 08 09:47:44 compute-1 ceph-mon[79846]: osdmap e36: 3 total, 3 up, 3 in
Dec 08 09:47:44 compute-1 ceph-mon[79846]: from='client.? 192.168.122.101:0/3268586272' entity='client.rgw.rgw.compute-1.rblbpq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec 08 09:47:44 compute-1 ceph-mon[79846]: from='client.? ' entity='client.rgw.rgw.compute-1.rblbpq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec 08 09:47:44 compute-1 ceph-mon[79846]: from='client.? 192.168.122.102:0/2102705496' entity='client.rgw.rgw.compute-2.dimexm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec 08 09:47:44 compute-1 ceph-mon[79846]: from='client.? ' entity='client.rgw.rgw.compute-2.dimexm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec 08 09:47:44 compute-1 ceph-mgr[80153]: mgr handle_mgr_map respawning because set of enabled modules changed!
Dec 08 09:47:44 compute-1 ceph-mgr[80153]: mgr respawn  e: '/usr/bin/ceph-mgr'
Dec 08 09:47:44 compute-1 ceph-mgr[80153]: mgr respawn  0: '/usr/bin/ceph-mgr'
Dec 08 09:47:44 compute-1 ceph-mgr[80153]: mgr respawn  1: '-n'
Dec 08 09:47:44 compute-1 ceph-mgr[80153]: mgr respawn  2: 'mgr.compute-1.mmkaif'
Dec 08 09:47:44 compute-1 ceph-mgr[80153]: mgr respawn  3: '-f'
Dec 08 09:47:44 compute-1 ceph-mgr[80153]: mgr respawn  4: '--setuser'
Dec 08 09:47:44 compute-1 ceph-mgr[80153]: mgr respawn  5: 'ceph'
Dec 08 09:47:44 compute-1 ceph-mgr[80153]: mgr respawn  6: '--setgroup'
Dec 08 09:47:44 compute-1 ceph-mgr[80153]: mgr respawn  7: 'ceph'
Dec 08 09:47:44 compute-1 ceph-mgr[80153]: mgr respawn  8: '--default-log-to-file=false'
Dec 08 09:47:44 compute-1 ceph-mgr[80153]: mgr respawn  9: '--default-log-to-journald=true'
Dec 08 09:47:44 compute-1 ceph-mgr[80153]: mgr respawn  10: '--default-log-to-stderr=false'
Dec 08 09:47:44 compute-1 ceph-mgr[80153]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Dec 08 09:47:44 compute-1 ceph-mgr[80153]: mgr respawn  exe_path /proc/self/exe
Dec 08 09:47:44 compute-1 sshd-session[72914]: Connection closed by 192.168.122.100 port 40316
Dec 08 09:47:44 compute-1 sshd-session[72858]: Connection closed by 192.168.122.100 port 40296
Dec 08 09:47:44 compute-1 sshd-session[72829]: Connection closed by 192.168.122.100 port 40284
Dec 08 09:47:44 compute-1 sshd-session[72742]: Connection closed by 192.168.122.100 port 40244
Dec 08 09:47:44 compute-1 sshd-session[72885]: Connection closed by 192.168.122.100 port 40302
Dec 08 09:47:44 compute-1 sshd-session[72684]: Connection closed by 192.168.122.100 port 40234
Dec 08 09:47:44 compute-1 sshd-session[72800]: Connection closed by 192.168.122.100 port 40268
Dec 08 09:47:44 compute-1 sshd-session[72655]: Connection closed by 192.168.122.100 port 40230
Dec 08 09:47:44 compute-1 sshd-session[72771]: Connection closed by 192.168.122.100 port 40252
Dec 08 09:47:44 compute-1 sshd-session[72713]: Connection closed by 192.168.122.100 port 40238
Dec 08 09:47:44 compute-1 sshd-session[72625]: Connection closed by 192.168.122.100 port 40222
Dec 08 09:47:44 compute-1 sshd-session[72626]: Connection closed by 192.168.122.100 port 40228
Dec 08 09:47:44 compute-1 sshd-session[72826]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 08 09:47:44 compute-1 systemd[1]: session-29.scope: Deactivated successfully.
Dec 08 09:47:44 compute-1 sshd-session[72911]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 08 09:47:44 compute-1 systemd-logind[795]: Session 29 logged out. Waiting for processes to exit.
Dec 08 09:47:44 compute-1 sshd-session[72768]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 08 09:47:44 compute-1 systemd[1]: session-32.scope: Deactivated successfully.
Dec 08 09:47:44 compute-1 systemd[1]: session-32.scope: Consumed 1min 5.505s CPU time.
Dec 08 09:47:44 compute-1 systemd[1]: session-27.scope: Deactivated successfully.
Dec 08 09:47:44 compute-1 sshd-session[72710]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 08 09:47:44 compute-1 systemd-logind[795]: Session 32 logged out. Waiting for processes to exit.
Dec 08 09:47:44 compute-1 systemd-logind[795]: Session 27 logged out. Waiting for processes to exit.
Dec 08 09:47:44 compute-1 systemd[1]: session-25.scope: Deactivated successfully.
Dec 08 09:47:44 compute-1 systemd-logind[795]: Session 25 logged out. Waiting for processes to exit.
Dec 08 09:47:44 compute-1 sshd-session[72652]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 08 09:47:44 compute-1 sshd-session[72602]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 08 09:47:44 compute-1 systemd[1]: session-23.scope: Deactivated successfully.
Dec 08 09:47:44 compute-1 sshd-session[72797]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 08 09:47:44 compute-1 systemd-logind[795]: Session 23 logged out. Waiting for processes to exit.
Dec 08 09:47:44 compute-1 systemd[1]: session-20.scope: Deactivated successfully.
Dec 08 09:47:44 compute-1 systemd-logind[795]: Session 20 logged out. Waiting for processes to exit.
Dec 08 09:47:44 compute-1 sshd-session[72612]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 08 09:47:44 compute-1 systemd-logind[795]: Removed session 29.
Dec 08 09:47:44 compute-1 sshd-session[72882]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 08 09:47:44 compute-1 systemd[1]: session-22.scope: Deactivated successfully.
Dec 08 09:47:44 compute-1 systemd[1]: session-28.scope: Deactivated successfully.
Dec 08 09:47:44 compute-1 systemd[1]: session-31.scope: Deactivated successfully.
Dec 08 09:47:44 compute-1 systemd-logind[795]: Session 22 logged out. Waiting for processes to exit.
Dec 08 09:47:44 compute-1 systemd-logind[795]: Session 28 logged out. Waiting for processes to exit.
Dec 08 09:47:44 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: ignoring --setuser ceph since I am not root
Dec 08 09:47:44 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: ignoring --setgroup ceph since I am not root
Dec 08 09:47:44 compute-1 systemd-logind[795]: Session 31 logged out. Waiting for processes to exit.
Dec 08 09:47:44 compute-1 sshd-session[72681]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 08 09:47:44 compute-1 sshd-session[72855]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 08 09:47:44 compute-1 systemd[1]: session-24.scope: Deactivated successfully.
Dec 08 09:47:44 compute-1 sshd-session[72739]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 08 09:47:44 compute-1 systemd[1]: session-30.scope: Deactivated successfully.
Dec 08 09:47:44 compute-1 systemd-logind[795]: Session 24 logged out. Waiting for processes to exit.
Dec 08 09:47:44 compute-1 systemd-logind[795]: Session 30 logged out. Waiting for processes to exit.
Dec 08 09:47:44 compute-1 systemd[1]: session-26.scope: Deactivated successfully.
Dec 08 09:47:44 compute-1 systemd-logind[795]: Session 26 logged out. Waiting for processes to exit.
Dec 08 09:47:44 compute-1 systemd-logind[795]: Removed session 32.
Dec 08 09:47:44 compute-1 systemd-logind[795]: Removed session 27.
Dec 08 09:47:44 compute-1 systemd-logind[795]: Removed session 25.
Dec 08 09:47:44 compute-1 ceph-mgr[80153]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Dec 08 09:47:44 compute-1 systemd-logind[795]: Removed session 23.
Dec 08 09:47:44 compute-1 ceph-mgr[80153]: pidfile_write: ignore empty --pid-file
Dec 08 09:47:44 compute-1 systemd-logind[795]: Removed session 20.
Dec 08 09:47:44 compute-1 systemd-logind[795]: Removed session 22.
Dec 08 09:47:44 compute-1 systemd-logind[795]: Removed session 28.
Dec 08 09:47:44 compute-1 systemd-logind[795]: Removed session 31.
Dec 08 09:47:44 compute-1 systemd-logind[795]: Removed session 24.
Dec 08 09:47:44 compute-1 systemd-logind[795]: Removed session 30.
Dec 08 09:47:44 compute-1 systemd-logind[795]: Removed session 26.
Dec 08 09:47:44 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'alerts'
Dec 08 09:47:45 compute-1 ceph-mgr[80153]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 08 09:47:45 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'balancer'
Dec 08 09:47:45 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:47:45.012+0000 7f8a0cc02140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 08 09:47:45 compute-1 ceph-mgr[80153]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 08 09:47:45 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'cephadm'
Dec 08 09:47:45 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:47:45.091+0000 7f8a0cc02140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 08 09:47:45 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e37 e37: 3 total, 3 up, 3 in
Dec 08 09:47:45 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Dec 08 09:47:45 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Dec 08 09:47:45 compute-1 ceph-mon[79846]: 6.1e deep-scrub starts
Dec 08 09:47:45 compute-1 ceph-mon[79846]: 6.1e deep-scrub ok
Dec 08 09:47:45 compute-1 ceph-mon[79846]: 6.0 scrub starts
Dec 08 09:47:45 compute-1 ceph-mon[79846]: 6.0 scrub ok
Dec 08 09:47:45 compute-1 ceph-mon[79846]: 7.7 scrub starts
Dec 08 09:47:45 compute-1 ceph-mon[79846]: 7.7 scrub ok
Dec 08 09:47:45 compute-1 ceph-mon[79846]: from='client.? 192.168.122.100:0/1032861131' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Dec 08 09:47:45 compute-1 ceph-mon[79846]: mgrmap e13: compute-0.kitiwu(active, since 2m), standbys: compute-2.zqytsv, compute-1.mmkaif
Dec 08 09:47:45 compute-1 ceph-mon[79846]: 4.15 scrub starts
Dec 08 09:47:45 compute-1 ceph-mon[79846]: 4.15 scrub ok
Dec 08 09:47:45 compute-1 ceph-mon[79846]: from='client.? ' entity='client.rgw.rgw.compute-1.rblbpq' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Dec 08 09:47:45 compute-1 ceph-mon[79846]: from='client.? ' entity='client.rgw.rgw.compute-2.dimexm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Dec 08 09:47:45 compute-1 ceph-mon[79846]: osdmap e37: 3 total, 3 up, 3 in
Dec 08 09:47:45 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'crash'
Dec 08 09:47:45 compute-1 ceph-mgr[80153]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 08 09:47:45 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'dashboard'
Dec 08 09:47:45 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:47:45.913+0000 7f8a0cc02140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 08 09:47:46 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'devicehealth'
Dec 08 09:47:46 compute-1 ceph-mgr[80153]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 08 09:47:46 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'diskprediction_local'
Dec 08 09:47:46 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:47:46.553+0000 7f8a0cc02140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 08 09:47:46 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e38 e38: 3 total, 3 up, 3 in
Dec 08 09:47:46 compute-1 ceph-mon[79846]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Dec 08 09:47:46 compute-1 ceph-mon[79846]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/3268586272' entity='client.rgw.rgw.compute-1.rblbpq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec 08 09:47:46 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 7.d scrub starts
Dec 08 09:47:46 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 38 pg[10.0( empty local-lis/les=0/0 n=0 ec=38/38 lis/c=0/0 les/c/f=0/0/0 sis=38) [0] r=0 lpr=38 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:47:46 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 7.d scrub ok
Dec 08 09:47:46 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 08 09:47:46 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 08 09:47:46 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]:   from numpy import show_config as show_numpy_config
Dec 08 09:47:46 compute-1 ceph-mon[79846]: 5.3 scrub starts
Dec 08 09:47:46 compute-1 ceph-mon[79846]: 5.3 scrub ok
Dec 08 09:47:46 compute-1 ceph-mon[79846]: 7.1 scrub starts
Dec 08 09:47:46 compute-1 ceph-mon[79846]: 7.1 scrub ok
Dec 08 09:47:46 compute-1 ceph-mon[79846]: 4.9 scrub starts
Dec 08 09:47:46 compute-1 ceph-mon[79846]: 4.9 scrub ok
Dec 08 09:47:46 compute-1 ceph-mon[79846]: osdmap e38: 3 total, 3 up, 3 in
Dec 08 09:47:46 compute-1 ceph-mon[79846]: from='client.? 192.168.122.100:0/3979683973' entity='client.rgw.rgw.compute-0.slkrtm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec 08 09:47:46 compute-1 ceph-mon[79846]: from='client.? 192.168.122.101:0/3268586272' entity='client.rgw.rgw.compute-1.rblbpq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec 08 09:47:46 compute-1 ceph-mon[79846]: from='client.? 192.168.122.102:0/2102705496' entity='client.rgw.rgw.compute-2.dimexm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec 08 09:47:46 compute-1 ceph-mon[79846]: from='client.? ' entity='client.rgw.rgw.compute-1.rblbpq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec 08 09:47:46 compute-1 ceph-mon[79846]: from='client.? ' entity='client.rgw.rgw.compute-2.dimexm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec 08 09:47:46 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:47:46.722+0000 7f8a0cc02140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 08 09:47:46 compute-1 ceph-mgr[80153]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 08 09:47:46 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'influx'
Dec 08 09:47:46 compute-1 ceph-mgr[80153]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 08 09:47:46 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'insights'
Dec 08 09:47:46 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:47:46.796+0000 7f8a0cc02140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 08 09:47:46 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'iostat'
Dec 08 09:47:46 compute-1 ceph-mgr[80153]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 08 09:47:46 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:47:46.938+0000 7f8a0cc02140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 08 09:47:46 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'k8sevents'
Dec 08 09:47:47 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'localpool'
Dec 08 09:47:47 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'mds_autoscaler'
Dec 08 09:47:47 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e39 e39: 3 total, 3 up, 3 in
Dec 08 09:47:47 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 39 pg[10.0( empty local-lis/les=38/39 n=0 ec=38/38 lis/c=0/0 les/c/f=0/0/0 sis=38) [0] r=0 lpr=38 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:47:47 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'mirroring'
Dec 08 09:47:47 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 7.c scrub starts
Dec 08 09:47:47 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 7.c scrub ok
Dec 08 09:47:47 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'nfs'
Dec 08 09:47:47 compute-1 ceph-mon[79846]: 3.6 scrub starts
Dec 08 09:47:47 compute-1 ceph-mon[79846]: 3.6 scrub ok
Dec 08 09:47:47 compute-1 ceph-mon[79846]: 7.d scrub starts
Dec 08 09:47:47 compute-1 ceph-mon[79846]: 7.d scrub ok
Dec 08 09:47:47 compute-1 ceph-mon[79846]: 2.f scrub starts
Dec 08 09:47:47 compute-1 ceph-mon[79846]: 2.f scrub ok
Dec 08 09:47:47 compute-1 ceph-mon[79846]: from='client.? 192.168.122.100:0/3979683973' entity='client.rgw.rgw.compute-0.slkrtm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec 08 09:47:47 compute-1 ceph-mon[79846]: from='client.? ' entity='client.rgw.rgw.compute-1.rblbpq' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec 08 09:47:47 compute-1 ceph-mon[79846]: from='client.? ' entity='client.rgw.rgw.compute-2.dimexm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec 08 09:47:47 compute-1 ceph-mon[79846]: osdmap e39: 3 total, 3 up, 3 in
Dec 08 09:47:47 compute-1 ceph-mgr[80153]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 08 09:47:47 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:47:47.959+0000 7f8a0cc02140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 08 09:47:47 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'orchestrator'
Dec 08 09:47:48 compute-1 ceph-mgr[80153]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 08 09:47:48 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'osd_perf_query'
Dec 08 09:47:48 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:47:48.185+0000 7f8a0cc02140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 08 09:47:48 compute-1 ceph-mgr[80153]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 08 09:47:48 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:47:48.263+0000 7f8a0cc02140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 08 09:47:48 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'osd_support'
Dec 08 09:47:48 compute-1 ceph-mgr[80153]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 08 09:47:48 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:47:48.330+0000 7f8a0cc02140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 08 09:47:48 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'pg_autoscaler'
Dec 08 09:47:48 compute-1 ceph-mgr[80153]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 08 09:47:48 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:47:48.413+0000 7f8a0cc02140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 08 09:47:48 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'progress'
Dec 08 09:47:48 compute-1 ceph-mgr[80153]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 08 09:47:48 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:47:48.483+0000 7f8a0cc02140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 08 09:47:48 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'prometheus'
Dec 08 09:47:48 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e40 e40: 3 total, 3 up, 3 in
Dec 08 09:47:48 compute-1 ceph-mon[79846]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Dec 08 09:47:48 compute-1 ceph-mon[79846]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/3268586272' entity='client.rgw.rgw.compute-1.rblbpq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec 08 09:47:48 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Dec 08 09:47:48 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Dec 08 09:47:48 compute-1 ceph-mon[79846]: 4.0 scrub starts
Dec 08 09:47:48 compute-1 ceph-mon[79846]: 4.0 scrub ok
Dec 08 09:47:48 compute-1 ceph-mon[79846]: 7.c scrub starts
Dec 08 09:47:48 compute-1 ceph-mon[79846]: 7.c scrub ok
Dec 08 09:47:48 compute-1 ceph-mon[79846]: 5.e scrub starts
Dec 08 09:47:48 compute-1 ceph-mon[79846]: 5.e scrub ok
Dec 08 09:47:48 compute-1 ceph-mon[79846]: osdmap e40: 3 total, 3 up, 3 in
Dec 08 09:47:48 compute-1 ceph-mon[79846]: from='client.? 192.168.122.100:0/3979683973' entity='client.rgw.rgw.compute-0.slkrtm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec 08 09:47:48 compute-1 ceph-mon[79846]: from='client.? 192.168.122.101:0/3268586272' entity='client.rgw.rgw.compute-1.rblbpq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec 08 09:47:48 compute-1 ceph-mon[79846]: from='client.? ' entity='client.rgw.rgw.compute-1.rblbpq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec 08 09:47:48 compute-1 ceph-mon[79846]: from='client.? 192.168.122.102:0/2102705496' entity='client.rgw.rgw.compute-2.dimexm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec 08 09:47:48 compute-1 ceph-mon[79846]: from='client.? ' entity='client.rgw.rgw.compute-2.dimexm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec 08 09:47:48 compute-1 ceph-mgr[80153]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 08 09:47:48 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:47:48.838+0000 7f8a0cc02140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 08 09:47:48 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'rbd_support'
Dec 08 09:47:48 compute-1 ceph-mgr[80153]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 08 09:47:48 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:47:48.934+0000 7f8a0cc02140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 08 09:47:48 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'restful'
Dec 08 09:47:49 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'rgw'
Dec 08 09:47:49 compute-1 ceph-mgr[80153]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 08 09:47:49 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:47:49.367+0000 7f8a0cc02140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 08 09:47:49 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'rook'
Dec 08 09:47:49 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 08 09:47:49 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e41 e41: 3 total, 3 up, 3 in
Dec 08 09:47:49 compute-1 ceph-mon[79846]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Dec 08 09:47:49 compute-1 ceph-mon[79846]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/3268586272' entity='client.rgw.rgw.compute-1.rblbpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec 08 09:47:49 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Dec 08 09:47:49 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Dec 08 09:47:49 compute-1 ceph-mon[79846]: 3.7 scrub starts
Dec 08 09:47:49 compute-1 ceph-mon[79846]: 3.7 scrub ok
Dec 08 09:47:49 compute-1 ceph-mon[79846]: 7.19 scrub starts
Dec 08 09:47:49 compute-1 ceph-mon[79846]: 7.19 scrub ok
Dec 08 09:47:49 compute-1 ceph-mon[79846]: 3.11 scrub starts
Dec 08 09:47:49 compute-1 ceph-mon[79846]: 3.11 scrub ok
Dec 08 09:47:49 compute-1 ceph-mon[79846]: from='client.? 192.168.122.100:0/3979683973' entity='client.rgw.rgw.compute-0.slkrtm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec 08 09:47:49 compute-1 ceph-mon[79846]: from='client.? ' entity='client.rgw.rgw.compute-1.rblbpq' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec 08 09:47:49 compute-1 ceph-mon[79846]: from='client.? ' entity='client.rgw.rgw.compute-2.dimexm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec 08 09:47:49 compute-1 ceph-mon[79846]: osdmap e41: 3 total, 3 up, 3 in
Dec 08 09:47:49 compute-1 ceph-mon[79846]: from='client.? 192.168.122.102:0/2102705496' entity='client.rgw.rgw.compute-2.dimexm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec 08 09:47:49 compute-1 ceph-mon[79846]: from='client.? 192.168.122.100:0/3979683973' entity='client.rgw.rgw.compute-0.slkrtm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec 08 09:47:49 compute-1 ceph-mon[79846]: from='client.? ' entity='client.rgw.rgw.compute-2.dimexm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec 08 09:47:49 compute-1 ceph-mon[79846]: from='client.? 192.168.122.101:0/3268586272' entity='client.rgw.rgw.compute-1.rblbpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec 08 09:47:49 compute-1 ceph-mon[79846]: from='client.? ' entity='client.rgw.rgw.compute-1.rblbpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec 08 09:47:49 compute-1 ceph-mgr[80153]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 08 09:47:49 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'selftest'
Dec 08 09:47:49 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:47:49.935+0000 7f8a0cc02140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 08 09:47:50 compute-1 ceph-mgr[80153]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 08 09:47:50 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'snap_schedule'
Dec 08 09:47:50 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:47:50.006+0000 7f8a0cc02140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 08 09:47:50 compute-1 ceph-mgr[80153]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 08 09:47:50 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'stats'
Dec 08 09:47:50 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:47:50.080+0000 7f8a0cc02140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 08 09:47:50 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'status'
Dec 08 09:47:50 compute-1 ceph-mgr[80153]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec 08 09:47:50 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:47:50.225+0000 7f8a0cc02140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec 08 09:47:50 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'telegraf'
Dec 08 09:47:50 compute-1 ceph-mgr[80153]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 08 09:47:50 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:47:50.293+0000 7f8a0cc02140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 08 09:47:50 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'telemetry'
Dec 08 09:47:50 compute-1 ceph-mgr[80153]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 08 09:47:50 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'test_orchestrator'
Dec 08 09:47:50 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:47:50.447+0000 7f8a0cc02140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 08 09:47:50 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e42 e42: 3 total, 3 up, 3 in
Dec 08 09:47:50 compute-1 ceph-mgr[80153]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 08 09:47:50 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:47:50.669+0000 7f8a0cc02140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 08 09:47:50 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'volumes'
Dec 08 09:47:50 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Dec 08 09:47:50 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Dec 08 09:47:50 compute-1 ceph-mon[79846]: 4.7 deep-scrub starts
Dec 08 09:47:50 compute-1 ceph-mon[79846]: 4.7 deep-scrub ok
Dec 08 09:47:50 compute-1 ceph-mon[79846]: 7.1a scrub starts
Dec 08 09:47:50 compute-1 ceph-mon[79846]: 7.1a scrub ok
Dec 08 09:47:50 compute-1 ceph-mon[79846]: 3.1a scrub starts
Dec 08 09:47:50 compute-1 ceph-mon[79846]: 3.1a scrub ok
Dec 08 09:47:50 compute-1 ceph-mon[79846]: from='client.? 192.168.122.100:0/3979683973' entity='client.rgw.rgw.compute-0.slkrtm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec 08 09:47:50 compute-1 ceph-mon[79846]: from='client.? ' entity='client.rgw.rgw.compute-2.dimexm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec 08 09:47:50 compute-1 ceph-mon[79846]: from='client.? ' entity='client.rgw.rgw.compute-1.rblbpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec 08 09:47:50 compute-1 ceph-mon[79846]: osdmap e42: 3 total, 3 up, 3 in
Dec 08 09:47:50 compute-1 radosgw[81249]: v1 topic migration: starting v1 topic migration..
Dec 08 09:47:50 compute-1 radosgw[81249]: LDAP not started since no server URIs were provided in the configuration.
Dec 08 09:47:50 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-rgw-rgw-compute-1-rblbpq[81245]: 2025-12-08T09:47:50.828+0000 7fb02fd1e980 -1 LDAP not started since no server URIs were provided in the configuration.
Dec 08 09:47:50 compute-1 radosgw[81249]: v1 topic migration: finished v1 topic migration
Dec 08 09:47:50 compute-1 radosgw[81249]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Dec 08 09:47:50 compute-1 radosgw[81249]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Dec 08 09:47:50 compute-1 radosgw[81249]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Dec 08 09:47:50 compute-1 radosgw[81249]: framework: beast
Dec 08 09:47:50 compute-1 radosgw[81249]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Dec 08 09:47:50 compute-1 radosgw[81249]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Dec 08 09:47:50 compute-1 radosgw[81249]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Dec 08 09:47:50 compute-1 radosgw[81249]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Dec 08 09:47:50 compute-1 radosgw[81249]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Dec 08 09:47:50 compute-1 radosgw[81249]: starting handler: beast
Dec 08 09:47:50 compute-1 radosgw[81249]: set uid:gid to 167:167 (ceph:ceph)
Dec 08 09:47:50 compute-1 radosgw[81249]: mgrc service_daemon_register rgw.24149 metadata {arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,container_hostname=compute-1,container_image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.101:8082,frontend_type#0=beast,hostname=compute-1,id=rgw.compute-1.rblbpq,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025,kernel_version=5.14.0-645.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864320,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=f2fa6c7a-b392-4a6f-84e7-a8a07770c620,zone_name=default,zonegroup_id=68492763-3f06-49eb-87b1-edc419fff75a,zonegroup_name=default}
Dec 08 09:47:50 compute-1 radosgw[81249]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Dec 08 09:47:50 compute-1 radosgw[81249]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Dec 08 09:47:50 compute-1 ceph-mgr[80153]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 08 09:47:50 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:47:50.941+0000 7f8a0cc02140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 08 09:47:50 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'zabbix'
Dec 08 09:47:50 compute-1 radosgw[81249]: INFO: RGWReshardLock::lock found lock on reshard.0000000014 to be held by another RGW process; skipping for now
Dec 08 09:47:51 compute-1 ceph-mgr[80153]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 08 09:47:51 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:47:51.013+0000 7f8a0cc02140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 08 09:47:51 compute-1 ceph-mgr[80153]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 08 09:47:51 compute-1 ceph-mgr[80153]: mgr load Constructed class from module: dashboard
Dec 08 09:47:51 compute-1 ceph-mgr[80153]: [dashboard INFO root] server: ssl=no host=192.168.122.101 port=8443
Dec 08 09:47:51 compute-1 ceph-mgr[80153]: [dashboard INFO root] Configured CherryPy, starting engine...
Dec 08 09:47:51 compute-1 ceph-mgr[80153]: [dashboard INFO root] Starting engine...
Dec 08 09:47:51 compute-1 ceph-mgr[80153]: ms_deliver_dispatch: unhandled message 0x5632842dd860 mon_map magic: 0 from mon.2 v2:192.168.122.101:3300/0
Dec 08 09:47:51 compute-1 ceph-mgr[80153]: [dashboard INFO root] Engine started...
Dec 08 09:47:51 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e43 e43: 3 total, 3 up, 3 in
Dec 08 09:47:51 compute-1 sshd-session[81915]: Accepted publickey for ceph-admin from 192.168.122.100 port 53878 ssh2: RSA SHA256:8l9QqAG/R36mxhB8hMA3YKcV0YDWbaRrGefHfTlg3OU
Dec 08 09:47:51 compute-1 systemd-logind[795]: New session 33 of user ceph-admin.
Dec 08 09:47:51 compute-1 systemd[1]: Started Session 33 of User ceph-admin.
Dec 08 09:47:51 compute-1 sshd-session[81915]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 08 09:47:51 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Dec 08 09:47:51 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Dec 08 09:47:51 compute-1 ceph-mon[79846]: 5.6 scrub starts
Dec 08 09:47:51 compute-1 ceph-mon[79846]: 5.6 scrub ok
Dec 08 09:47:51 compute-1 ceph-mon[79846]: 5.1f scrub starts
Dec 08 09:47:51 compute-1 ceph-mon[79846]: 5.1f scrub ok
Dec 08 09:47:51 compute-1 ceph-mon[79846]: Standby manager daemon compute-2.zqytsv restarted
Dec 08 09:47:51 compute-1 ceph-mon[79846]: Standby manager daemon compute-2.zqytsv started
Dec 08 09:47:51 compute-1 ceph-mon[79846]: Standby manager daemon compute-1.mmkaif restarted
Dec 08 09:47:51 compute-1 ceph-mon[79846]: Standby manager daemon compute-1.mmkaif started
Dec 08 09:47:51 compute-1 ceph-mon[79846]: 7.5 scrub starts
Dec 08 09:47:51 compute-1 ceph-mon[79846]: 7.5 scrub ok
Dec 08 09:47:51 compute-1 ceph-mon[79846]: Active manager daemon compute-0.kitiwu restarted
Dec 08 09:47:51 compute-1 ceph-mon[79846]: Activating manager daemon compute-0.kitiwu
Dec 08 09:47:51 compute-1 ceph-mon[79846]: osdmap e43: 3 total, 3 up, 3 in
Dec 08 09:47:51 compute-1 ceph-mon[79846]: mgrmap e14: compute-0.kitiwu(active, starting, since 0.0342847s), standbys: compute-1.mmkaif, compute-2.zqytsv
Dec 08 09:47:51 compute-1 ceph-mon[79846]: from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 08 09:47:51 compute-1 ceph-mon[79846]: from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 08 09:47:51 compute-1 ceph-mon[79846]: from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 08 09:47:51 compute-1 ceph-mon[79846]: from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mgr metadata", "who": "compute-0.kitiwu", "id": "compute-0.kitiwu"}]: dispatch
Dec 08 09:47:51 compute-1 ceph-mon[79846]: from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mgr metadata", "who": "compute-1.mmkaif", "id": "compute-1.mmkaif"}]: dispatch
Dec 08 09:47:51 compute-1 ceph-mon[79846]: from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mgr metadata", "who": "compute-2.zqytsv", "id": "compute-2.zqytsv"}]: dispatch
Dec 08 09:47:51 compute-1 ceph-mon[79846]: from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 08 09:47:51 compute-1 ceph-mon[79846]: from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 08 09:47:51 compute-1 ceph-mon[79846]: from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 08 09:47:51 compute-1 ceph-mon[79846]: from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec 08 09:47:51 compute-1 ceph-mon[79846]: from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 08 09:47:51 compute-1 ceph-mon[79846]: from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec 08 09:47:51 compute-1 ceph-mon[79846]: Manager daemon compute-0.kitiwu is now available
Dec 08 09:47:51 compute-1 ceph-mon[79846]: from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.kitiwu/mirror_snapshot_schedule"}]: dispatch
Dec 08 09:47:51 compute-1 ceph-mon[79846]: from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.kitiwu/trash_purge_schedule"}]: dispatch
Dec 08 09:47:51 compute-1 sudo[81919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 08 09:47:51 compute-1 sudo[81919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:47:51 compute-1 sudo[81919]: pam_unix(sudo:session): session closed for user root
Dec 08 09:47:51 compute-1 sudo[81944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Dec 08 09:47:51 compute-1 sudo[81944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:47:52 compute-1 podman[82039]: 2025-12-08 09:47:52.62464812 +0000 UTC m=+0.090076238 container exec 0b1ceffabe238cda53122741c28a5b5e679f0efe3fbc9966719b6787d3af3102 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-crash-compute-1, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 08 09:47:52 compute-1 podman[82039]: 2025-12-08 09:47:52.739425696 +0000 UTC m=+0.204853764 container exec_died 0b1ceffabe238cda53122741c28a5b5e679f0efe3fbc9966719b6787d3af3102 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-crash-compute-1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec 08 09:47:52 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Dec 08 09:47:52 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Dec 08 09:47:53 compute-1 ceph-mon[79846]: 5.c scrub starts
Dec 08 09:47:53 compute-1 ceph-mon[79846]: 5.c scrub ok
Dec 08 09:47:53 compute-1 ceph-mon[79846]: 5.10 scrub starts
Dec 08 09:47:53 compute-1 ceph-mon[79846]: 5.10 scrub ok
Dec 08 09:47:53 compute-1 ceph-mon[79846]: 2.5 scrub starts
Dec 08 09:47:53 compute-1 ceph-mon[79846]: 2.5 scrub ok
Dec 08 09:47:53 compute-1 ceph-mon[79846]: mgrmap e15: compute-0.kitiwu(active, since 1.06674s), standbys: compute-1.mmkaif, compute-2.zqytsv
Dec 08 09:47:53 compute-1 ceph-mon[79846]: from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:53 compute-1 ceph-mon[79846]: [08/Dec/2025:09:47:52] ENGINE Bus STARTING
Dec 08 09:47:53 compute-1 ceph-mon[79846]: [08/Dec/2025:09:47:52] ENGINE Serving on http://192.168.122.100:8765
Dec 08 09:47:53 compute-1 ceph-mon[79846]: [08/Dec/2025:09:47:52] ENGINE Serving on https://192.168.122.100:7150
Dec 08 09:47:53 compute-1 ceph-mon[79846]: [08/Dec/2025:09:47:52] ENGINE Bus STARTED
Dec 08 09:47:53 compute-1 ceph-mon[79846]: [08/Dec/2025:09:47:52] ENGINE Client ('192.168.122.100', 50968) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 08 09:47:53 compute-1 ceph-mon[79846]: from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:53 compute-1 ceph-mon[79846]: from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:53 compute-1 ceph-mon[79846]: from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:53 compute-1 ceph-mon[79846]: from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:53 compute-1 sudo[81944]: pam_unix(sudo:session): session closed for user root
Dec 08 09:47:53 compute-1 sudo[82144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 08 09:47:53 compute-1 sudo[82144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:47:53 compute-1 sudo[82144]: pam_unix(sudo:session): session closed for user root
Dec 08 09:47:53 compute-1 sudo[82169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 08 09:47:53 compute-1 sudo[82169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:47:53 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 3.14 deep-scrub starts
Dec 08 09:47:53 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 3.14 deep-scrub ok
Dec 08 09:47:53 compute-1 sudo[82169]: pam_unix(sudo:session): session closed for user root
Dec 08 09:47:54 compute-1 sudo[82224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 08 09:47:54 compute-1 sudo[82224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:47:54 compute-1 sudo[82224]: pam_unix(sudo:session): session closed for user root
Dec 08 09:47:54 compute-1 sudo[82249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Dec 08 09:47:54 compute-1 sudo[82249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:47:54 compute-1 ceph-mon[79846]: 6.f scrub starts
Dec 08 09:47:54 compute-1 ceph-mon[79846]: 6.f scrub ok
Dec 08 09:47:54 compute-1 ceph-mon[79846]: 5.11 scrub starts
Dec 08 09:47:54 compute-1 ceph-mon[79846]: 5.11 scrub ok
Dec 08 09:47:54 compute-1 ceph-mon[79846]: 4.1 scrub starts
Dec 08 09:47:54 compute-1 ceph-mon[79846]: 4.1 scrub ok
Dec 08 09:47:54 compute-1 ceph-mon[79846]: from='client.14421 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-password", "target": ["mon-mgr", ""]}]: dispatch
Dec 08 09:47:54 compute-1 ceph-mon[79846]: pgmap v4: 197 pgs: 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec 08 09:47:54 compute-1 ceph-mon[79846]: from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:54 compute-1 ceph-mon[79846]: from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:54 compute-1 ceph-mon[79846]: from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:54 compute-1 ceph-mon[79846]: from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:54 compute-1 ceph-mon[79846]: mgrmap e16: compute-0.kitiwu(active, since 3s), standbys: compute-1.mmkaif, compute-2.zqytsv
Dec 08 09:47:54 compute-1 sudo[82249]: pam_unix(sudo:session): session closed for user root
Dec 08 09:47:54 compute-1 sudo[82291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 08 09:47:54 compute-1 sudo[82291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:47:54 compute-1 sudo[82291]: pam_unix(sudo:session): session closed for user root
Dec 08 09:47:54 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 08 09:47:54 compute-1 sudo[82316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/etc/ceph
Dec 08 09:47:54 compute-1 sudo[82316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:47:54 compute-1 sudo[82316]: pam_unix(sudo:session): session closed for user root
Dec 08 09:47:54 compute-1 sudo[82341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/etc/ceph/ceph.conf.new
Dec 08 09:47:54 compute-1 sudo[82341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:47:54 compute-1 sudo[82341]: pam_unix(sudo:session): session closed for user root
Dec 08 09:47:54 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Dec 08 09:47:54 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Dec 08 09:47:54 compute-1 sudo[82366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4
Dec 08 09:47:54 compute-1 sudo[82366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:47:54 compute-1 sudo[82366]: pam_unix(sudo:session): session closed for user root
Dec 08 09:47:54 compute-1 sudo[82391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/etc/ceph/ceph.conf.new
Dec 08 09:47:54 compute-1 sudo[82391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:47:54 compute-1 sudo[82391]: pam_unix(sudo:session): session closed for user root
Dec 08 09:47:54 compute-1 sudo[82439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/etc/ceph/ceph.conf.new
Dec 08 09:47:54 compute-1 sudo[82439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:47:54 compute-1 sudo[82439]: pam_unix(sudo:session): session closed for user root
Dec 08 09:47:55 compute-1 sudo[82464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/etc/ceph/ceph.conf.new
Dec 08 09:47:55 compute-1 sudo[82464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:47:55 compute-1 sudo[82464]: pam_unix(sudo:session): session closed for user root
Dec 08 09:47:55 compute-1 sudo[82489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 08 09:47:55 compute-1 sudo[82489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:47:55 compute-1 sudo[82489]: pam_unix(sudo:session): session closed for user root
Dec 08 09:47:55 compute-1 sudo[82514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config
Dec 08 09:47:55 compute-1 sudo[82514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:47:55 compute-1 sudo[82514]: pam_unix(sudo:session): session closed for user root
Dec 08 09:47:55 compute-1 ceph-mon[79846]: 3.b deep-scrub starts
Dec 08 09:47:55 compute-1 ceph-mon[79846]: 3.b deep-scrub ok
Dec 08 09:47:55 compute-1 ceph-mon[79846]: 3.14 deep-scrub starts
Dec 08 09:47:55 compute-1 ceph-mon[79846]: 3.14 deep-scrub ok
Dec 08 09:47:55 compute-1 ceph-mon[79846]: 4.8 scrub starts
Dec 08 09:47:55 compute-1 ceph-mon[79846]: 4.8 scrub ok
Dec 08 09:47:55 compute-1 ceph-mon[79846]: from='client.14433 -' entity='client.admin' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.122.100:9093", "target": ["mon-mgr", ""]}]: dispatch
Dec 08 09:47:55 compute-1 ceph-mon[79846]: from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:55 compute-1 ceph-mon[79846]: from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:55 compute-1 ceph-mon[79846]: from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 08 09:47:55 compute-1 ceph-mon[79846]: from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:55 compute-1 ceph-mon[79846]: from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:55 compute-1 ceph-mon[79846]: from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:55 compute-1 ceph-mon[79846]: from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 08 09:47:55 compute-1 ceph-mon[79846]: from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:55 compute-1 ceph-mon[79846]: from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 08 09:47:55 compute-1 ceph-mon[79846]: from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Dec 08 09:47:55 compute-1 ceph-mon[79846]: from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 08 09:47:55 compute-1 ceph-mon[79846]: from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 08 09:47:55 compute-1 ceph-mon[79846]: Updating compute-0:/etc/ceph/ceph.conf
Dec 08 09:47:55 compute-1 ceph-mon[79846]: Updating compute-1:/etc/ceph/ceph.conf
Dec 08 09:47:55 compute-1 ceph-mon[79846]: Updating compute-2:/etc/ceph/ceph.conf
Dec 08 09:47:55 compute-1 ceph-mon[79846]: 5.15 scrub starts
Dec 08 09:47:55 compute-1 ceph-mon[79846]: 5.15 scrub ok
Dec 08 09:47:55 compute-1 ceph-mon[79846]: from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:55 compute-1 sudo[82539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config
Dec 08 09:47:55 compute-1 sudo[82539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:47:55 compute-1 sudo[82539]: pam_unix(sudo:session): session closed for user root
Dec 08 09:47:55 compute-1 sudo[82564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf.new
Dec 08 09:47:55 compute-1 sudo[82564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:47:55 compute-1 sudo[82564]: pam_unix(sudo:session): session closed for user root
Dec 08 09:47:55 compute-1 sudo[82589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4
Dec 08 09:47:55 compute-1 sudo[82589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:47:55 compute-1 sudo[82589]: pam_unix(sudo:session): session closed for user root
Dec 08 09:47:55 compute-1 sudo[82614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf.new
Dec 08 09:47:55 compute-1 sudo[82614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:47:55 compute-1 sudo[82614]: pam_unix(sudo:session): session closed for user root
Dec 08 09:47:55 compute-1 sudo[82662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf.new
Dec 08 09:47:55 compute-1 sudo[82662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:47:55 compute-1 sudo[82662]: pam_unix(sudo:session): session closed for user root
Dec 08 09:47:55 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Dec 08 09:47:55 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Dec 08 09:47:55 compute-1 sudo[82687]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf.new
Dec 08 09:47:55 compute-1 sudo[82687]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:47:55 compute-1 sudo[82687]: pam_unix(sudo:session): session closed for user root
Dec 08 09:47:55 compute-1 sudo[82712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf.new /var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf
Dec 08 09:47:55 compute-1 sudo[82712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:47:55 compute-1 sudo[82712]: pam_unix(sudo:session): session closed for user root
Dec 08 09:47:55 compute-1 sudo[82737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 08 09:47:55 compute-1 sudo[82737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:47:55 compute-1 sudo[82737]: pam_unix(sudo:session): session closed for user root
Dec 08 09:47:55 compute-1 sudo[82762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/etc/ceph
Dec 08 09:47:55 compute-1 sudo[82762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:47:55 compute-1 sudo[82762]: pam_unix(sudo:session): session closed for user root
Dec 08 09:47:56 compute-1 sudo[82787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/etc/ceph/ceph.client.admin.keyring.new
Dec 08 09:47:56 compute-1 sudo[82787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:47:56 compute-1 sudo[82787]: pam_unix(sudo:session): session closed for user root
Dec 08 09:47:56 compute-1 sudo[82812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4
Dec 08 09:47:56 compute-1 sudo[82812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:47:56 compute-1 sudo[82812]: pam_unix(sudo:session): session closed for user root
Dec 08 09:47:56 compute-1 sudo[82837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/etc/ceph/ceph.client.admin.keyring.new
Dec 08 09:47:56 compute-1 sudo[82837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:47:56 compute-1 sudo[82837]: pam_unix(sudo:session): session closed for user root
Dec 08 09:47:56 compute-1 ceph-mon[79846]: 4.b scrub starts
Dec 08 09:47:56 compute-1 ceph-mon[79846]: 4.b scrub ok
Dec 08 09:47:56 compute-1 ceph-mon[79846]: from='client.24187 -' entity='client.admin' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.122.100:9092", "target": ["mon-mgr", ""]}]: dispatch
Dec 08 09:47:56 compute-1 ceph-mon[79846]: 6.12 scrub starts
Dec 08 09:47:56 compute-1 ceph-mon[79846]: 6.12 scrub ok
Dec 08 09:47:56 compute-1 ceph-mon[79846]: Updating compute-0:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf
Dec 08 09:47:56 compute-1 ceph-mon[79846]: Updating compute-1:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf
Dec 08 09:47:56 compute-1 ceph-mon[79846]: Updating compute-2:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf
Dec 08 09:47:56 compute-1 ceph-mon[79846]: pgmap v5: 197 pgs: 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec 08 09:47:56 compute-1 ceph-mon[79846]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 08 09:47:56 compute-1 ceph-mon[79846]: 4.13 scrub starts
Dec 08 09:47:56 compute-1 ceph-mon[79846]: 4.13 scrub ok
Dec 08 09:47:56 compute-1 ceph-mon[79846]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec 08 09:47:56 compute-1 ceph-mon[79846]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec 08 09:47:56 compute-1 ceph-mon[79846]: from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:56 compute-1 ceph-mon[79846]: mgrmap e17: compute-0.kitiwu(active, since 4s), standbys: compute-1.mmkaif, compute-2.zqytsv
Dec 08 09:47:56 compute-1 sudo[82885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/etc/ceph/ceph.client.admin.keyring.new
Dec 08 09:47:56 compute-1 sudo[82885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:47:56 compute-1 sudo[82885]: pam_unix(sudo:session): session closed for user root
Dec 08 09:47:56 compute-1 sudo[82910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/etc/ceph/ceph.client.admin.keyring.new
Dec 08 09:47:56 compute-1 sudo[82910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:47:56 compute-1 sudo[82910]: pam_unix(sudo:session): session closed for user root
Dec 08 09:47:56 compute-1 sudo[82935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Dec 08 09:47:56 compute-1 sudo[82935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:47:56 compute-1 sudo[82935]: pam_unix(sudo:session): session closed for user root
Dec 08 09:47:56 compute-1 sudo[82960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config
Dec 08 09:47:56 compute-1 sudo[82960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:47:56 compute-1 sudo[82960]: pam_unix(sudo:session): session closed for user root
Dec 08 09:47:56 compute-1 sudo[82985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config
Dec 08 09:47:56 compute-1 sudo[82985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:47:56 compute-1 sudo[82985]: pam_unix(sudo:session): session closed for user root
Dec 08 09:47:56 compute-1 sudo[83010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.client.admin.keyring.new
Dec 08 09:47:56 compute-1 sudo[83010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:47:56 compute-1 sudo[83010]: pam_unix(sudo:session): session closed for user root
Dec 08 09:47:56 compute-1 sudo[83035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4
Dec 08 09:47:56 compute-1 sudo[83035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:47:56 compute-1 sudo[83035]: pam_unix(sudo:session): session closed for user root
Dec 08 09:47:56 compute-1 sudo[83060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.client.admin.keyring.new
Dec 08 09:47:56 compute-1 sudo[83060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:47:56 compute-1 sudo[83060]: pam_unix(sudo:session): session closed for user root
Dec 08 09:47:56 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Dec 08 09:47:56 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Dec 08 09:47:56 compute-1 sudo[83108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.client.admin.keyring.new
Dec 08 09:47:56 compute-1 sudo[83108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:47:56 compute-1 sudo[83108]: pam_unix(sudo:session): session closed for user root
Dec 08 09:47:56 compute-1 sudo[83133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.client.admin.keyring.new
Dec 08 09:47:56 compute-1 sudo[83133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:47:56 compute-1 sudo[83133]: pam_unix(sudo:session): session closed for user root
Dec 08 09:47:56 compute-1 sudo[83158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.client.admin.keyring.new /var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.client.admin.keyring
Dec 08 09:47:56 compute-1 sudo[83158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:47:56 compute-1 sudo[83158]: pam_unix(sudo:session): session closed for user root
Dec 08 09:47:57 compute-1 ceph-mon[79846]: 5.a scrub starts
Dec 08 09:47:57 compute-1 ceph-mon[79846]: 5.a scrub ok
Dec 08 09:47:57 compute-1 ceph-mon[79846]: from='client.14445 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "http://192.168.122.100:3100", "target": ["mon-mgr", ""]}]: dispatch
Dec 08 09:47:57 compute-1 ceph-mon[79846]: 5.4 scrub starts
Dec 08 09:47:57 compute-1 ceph-mon[79846]: 5.4 scrub ok
Dec 08 09:47:57 compute-1 ceph-mon[79846]: Updating compute-0:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.client.admin.keyring
Dec 08 09:47:57 compute-1 ceph-mon[79846]: Updating compute-2:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.client.admin.keyring
Dec 08 09:47:57 compute-1 ceph-mon[79846]: Updating compute-1:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.client.admin.keyring
Dec 08 09:47:57 compute-1 ceph-mon[79846]: 3.13 scrub starts
Dec 08 09:47:57 compute-1 ceph-mon[79846]: 3.13 scrub ok
Dec 08 09:47:57 compute-1 ceph-mon[79846]: from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:57 compute-1 ceph-mon[79846]: from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:57 compute-1 ceph-mon[79846]: from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:57 compute-1 ceph-mon[79846]: from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:57 compute-1 ceph-mon[79846]: from='client.? 192.168.122.100:0/4114852922' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Dec 08 09:47:57 compute-1 ceph-mon[79846]: from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:57 compute-1 ceph-mon[79846]: from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:57 compute-1 ceph-mon[79846]: from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' 
Dec 08 09:47:57 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Dec 08 09:47:57 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Dec 08 09:47:57 compute-1 ceph-mgr[80153]: mgr handle_mgr_map respawning because set of enabled modules changed!
Dec 08 09:47:57 compute-1 ceph-mgr[80153]: mgr respawn  e: '/usr/bin/ceph-mgr'
Dec 08 09:47:57 compute-1 ceph-mgr[80153]: mgr respawn  0: '/usr/bin/ceph-mgr'
Dec 08 09:47:57 compute-1 ceph-mgr[80153]: mgr respawn  1: '-n'
Dec 08 09:47:57 compute-1 ceph-mgr[80153]: mgr respawn  2: 'mgr.compute-1.mmkaif'
Dec 08 09:47:57 compute-1 ceph-mgr[80153]: mgr respawn  3: '-f'
Dec 08 09:47:57 compute-1 ceph-mgr[80153]: mgr respawn  4: '--setuser'
Dec 08 09:47:57 compute-1 ceph-mgr[80153]: mgr respawn  5: 'ceph'
Dec 08 09:47:57 compute-1 ceph-mgr[80153]: mgr respawn  6: '--setgroup'
Dec 08 09:47:57 compute-1 ceph-mgr[80153]: mgr respawn  7: 'ceph'
Dec 08 09:47:57 compute-1 ceph-mgr[80153]: mgr respawn  8: '--default-log-to-file=false'
Dec 08 09:47:57 compute-1 ceph-mgr[80153]: mgr respawn  9: '--default-log-to-journald=true'
Dec 08 09:47:57 compute-1 ceph-mgr[80153]: mgr respawn  10: '--default-log-to-stderr=false'
Dec 08 09:47:57 compute-1 ceph-mgr[80153]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Dec 08 09:47:57 compute-1 ceph-mgr[80153]: mgr respawn  exe_path /proc/self/exe
Dec 08 09:47:58 compute-1 sshd-session[81918]: Connection closed by 192.168.122.100 port 53878
Dec 08 09:47:58 compute-1 sshd-session[81915]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 08 09:47:58 compute-1 systemd[1]: session-33.scope: Deactivated successfully.
Dec 08 09:47:58 compute-1 systemd[1]: session-33.scope: Consumed 5.273s CPU time.
Dec 08 09:47:58 compute-1 systemd-logind[795]: Session 33 logged out. Waiting for processes to exit.
Dec 08 09:47:58 compute-1 systemd-logind[795]: Removed session 33.
Dec 08 09:47:58 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: ignoring --setuser ceph since I am not root
Dec 08 09:47:58 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: ignoring --setgroup ceph since I am not root
Dec 08 09:47:58 compute-1 ceph-mgr[80153]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Dec 08 09:47:58 compute-1 ceph-mgr[80153]: pidfile_write: ignore empty --pid-file
Dec 08 09:47:58 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'alerts'
Dec 08 09:47:58 compute-1 ceph-mgr[80153]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 08 09:47:58 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'balancer'
Dec 08 09:47:58 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:47:58.238+0000 7f97f84f4140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 08 09:47:58 compute-1 ceph-mon[79846]: 6.9 scrub starts
Dec 08 09:47:58 compute-1 ceph-mon[79846]: 6.9 scrub ok
Dec 08 09:47:58 compute-1 ceph-mon[79846]: 2.b scrub starts
Dec 08 09:47:58 compute-1 ceph-mon[79846]: 2.b scrub ok
Dec 08 09:47:58 compute-1 ceph-mon[79846]: Deploying daemon node-exporter.compute-0 on compute-0
Dec 08 09:47:58 compute-1 ceph-mon[79846]: pgmap v6: 197 pgs: 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec 08 09:47:58 compute-1 ceph-mon[79846]: 3.10 scrub starts
Dec 08 09:47:58 compute-1 ceph-mon[79846]: 3.10 scrub ok
Dec 08 09:47:58 compute-1 ceph-mon[79846]: from='client.? 192.168.122.100:0/4114852922' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Dec 08 09:47:58 compute-1 ceph-mon[79846]: mgrmap e18: compute-0.kitiwu(active, since 6s), standbys: compute-1.mmkaif, compute-2.zqytsv
Dec 08 09:47:58 compute-1 ceph-mgr[80153]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 08 09:47:58 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'cephadm'
Dec 08 09:47:58 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:47:58.322+0000 7f97f84f4140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 08 09:47:58 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 5.16 deep-scrub starts
Dec 08 09:47:58 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 5.16 deep-scrub ok
Dec 08 09:47:59 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'crash'
Dec 08 09:47:59 compute-1 ceph-mgr[80153]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 08 09:47:59 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'dashboard'
Dec 08 09:47:59 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:47:59.100+0000 7f97f84f4140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 08 09:47:59 compute-1 ceph-mon[79846]: 6.b scrub starts
Dec 08 09:47:59 compute-1 ceph-mon[79846]: 6.b scrub ok
Dec 08 09:47:59 compute-1 ceph-mon[79846]: 3.1d scrub starts
Dec 08 09:47:59 compute-1 ceph-mon[79846]: 3.1d scrub ok
Dec 08 09:47:59 compute-1 ceph-mon[79846]: 5.16 deep-scrub starts
Dec 08 09:47:59 compute-1 ceph-mon[79846]: 5.16 deep-scrub ok
Dec 08 09:47:59 compute-1 ceph-mon[79846]: from='client.? 192.168.122.100:0/3310196236' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Dec 08 09:47:59 compute-1 sshd-session[83216]: Received disconnect from 79.32.212.213 port 52580:11: Bye Bye [preauth]
Dec 08 09:47:59 compute-1 sshd-session[83216]: Disconnected from authenticating user root 79.32.212.213 port 52580 [preauth]
Dec 08 09:47:59 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 08 09:47:59 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'devicehealth'
Dec 08 09:47:59 compute-1 ceph-mgr[80153]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 08 09:47:59 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'diskprediction_local'
Dec 08 09:47:59 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:47:59.741+0000 7f97f84f4140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 08 09:47:59 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 08 09:47:59 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 08 09:47:59 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]:   from numpy import show_config as show_numpy_config
Dec 08 09:47:59 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 3.f scrub starts
Dec 08 09:47:59 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:47:59.912+0000 7f97f84f4140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 08 09:47:59 compute-1 ceph-mgr[80153]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 08 09:47:59 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'influx'
Dec 08 09:47:59 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 3.f scrub ok
Dec 08 09:47:59 compute-1 ceph-mgr[80153]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 08 09:47:59 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'insights'
Dec 08 09:47:59 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:47:59.983+0000 7f97f84f4140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 08 09:48:00 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'iostat'
Dec 08 09:48:00 compute-1 ceph-mgr[80153]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 08 09:48:00 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'k8sevents'
Dec 08 09:48:00 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:48:00.118+0000 7f97f84f4140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 08 09:48:00 compute-1 ceph-mon[79846]: 4.17 scrub starts
Dec 08 09:48:00 compute-1 ceph-mon[79846]: 4.17 scrub ok
Dec 08 09:48:00 compute-1 ceph-mon[79846]: 5.1a scrub starts
Dec 08 09:48:00 compute-1 ceph-mon[79846]: 5.1a scrub ok
Dec 08 09:48:00 compute-1 ceph-mon[79846]: from='client.? 192.168.122.100:0/3310196236' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Dec 08 09:48:00 compute-1 ceph-mon[79846]: mgrmap e19: compute-0.kitiwu(active, since 8s), standbys: compute-1.mmkaif, compute-2.zqytsv
Dec 08 09:48:00 compute-1 ceph-mon[79846]: 4.16 scrub starts
Dec 08 09:48:00 compute-1 ceph-mon[79846]: 4.16 scrub ok
Dec 08 09:48:00 compute-1 ceph-mon[79846]: 3.f scrub starts
Dec 08 09:48:00 compute-1 ceph-mon[79846]: 3.f scrub ok
Dec 08 09:48:00 compute-1 sshd-session[83213]: Received disconnect from 103.191.92.236 port 45300:11: Bye Bye [preauth]
Dec 08 09:48:00 compute-1 sshd-session[83213]: Disconnected from authenticating user root 103.191.92.236 port 45300 [preauth]
Dec 08 09:48:00 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'localpool'
Dec 08 09:48:00 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'mds_autoscaler'
Dec 08 09:48:00 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'mirroring'
Dec 08 09:48:00 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'nfs'
Dec 08 09:48:00 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 6.15 scrub starts
Dec 08 09:48:00 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 6.15 scrub ok
Dec 08 09:48:01 compute-1 ceph-mgr[80153]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 08 09:48:01 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'orchestrator'
Dec 08 09:48:01 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:48:01.093+0000 7f97f84f4140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 08 09:48:01 compute-1 ceph-mgr[80153]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 08 09:48:01 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'osd_perf_query'
Dec 08 09:48:01 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:48:01.316+0000 7f97f84f4140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 08 09:48:01 compute-1 ceph-mon[79846]: 3.9 deep-scrub starts
Dec 08 09:48:01 compute-1 ceph-mon[79846]: 3.9 deep-scrub ok
Dec 08 09:48:01 compute-1 ceph-mon[79846]: 5.17 deep-scrub starts
Dec 08 09:48:01 compute-1 ceph-mon[79846]: 5.17 deep-scrub ok
Dec 08 09:48:01 compute-1 ceph-mon[79846]: 6.15 scrub starts
Dec 08 09:48:01 compute-1 ceph-mon[79846]: 6.15 scrub ok
Dec 08 09:48:01 compute-1 ceph-mgr[80153]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 08 09:48:01 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'osd_support'
Dec 08 09:48:01 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:48:01.386+0000 7f97f84f4140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 08 09:48:01 compute-1 ceph-mgr[80153]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 08 09:48:01 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'pg_autoscaler'
Dec 08 09:48:01 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:48:01.454+0000 7f97f84f4140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 08 09:48:01 compute-1 ceph-mgr[80153]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 08 09:48:01 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'progress'
Dec 08 09:48:01 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:48:01.533+0000 7f97f84f4140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 08 09:48:01 compute-1 ceph-mgr[80153]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 08 09:48:01 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'prometheus'
Dec 08 09:48:01 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:48:01.608+0000 7f97f84f4140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 08 09:48:01 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Dec 08 09:48:01 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Dec 08 09:48:01 compute-1 ceph-mgr[80153]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 08 09:48:01 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:48:01.962+0000 7f97f84f4140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 08 09:48:01 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'rbd_support'
Dec 08 09:48:02 compute-1 ceph-mgr[80153]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 08 09:48:02 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'restful'
Dec 08 09:48:02 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:48:02.082+0000 7f97f84f4140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 08 09:48:02 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'rgw'
Dec 08 09:48:02 compute-1 ceph-mon[79846]: 4.2 deep-scrub starts
Dec 08 09:48:02 compute-1 ceph-mon[79846]: 4.2 deep-scrub ok
Dec 08 09:48:02 compute-1 ceph-mon[79846]: 6.14 scrub starts
Dec 08 09:48:02 compute-1 ceph-mon[79846]: 6.14 scrub ok
Dec 08 09:48:02 compute-1 ceph-mon[79846]: 5.9 scrub starts
Dec 08 09:48:02 compute-1 ceph-mon[79846]: 5.9 scrub ok
Dec 08 09:48:02 compute-1 ceph-mgr[80153]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 08 09:48:02 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'rook'
Dec 08 09:48:02 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:48:02.528+0000 7f97f84f4140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 08 09:48:02 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 3.c scrub starts
Dec 08 09:48:02 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 3.c scrub ok
Dec 08 09:48:03 compute-1 ceph-mgr[80153]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 08 09:48:03 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'selftest'
Dec 08 09:48:03 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:48:03.110+0000 7f97f84f4140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 08 09:48:03 compute-1 ceph-mgr[80153]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 08 09:48:03 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'snap_schedule'
Dec 08 09:48:03 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:48:03.182+0000 7f97f84f4140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 08 09:48:03 compute-1 ceph-mgr[80153]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 08 09:48:03 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'stats'
Dec 08 09:48:03 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:48:03.268+0000 7f97f84f4140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 08 09:48:03 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'status'
Dec 08 09:48:03 compute-1 ceph-mgr[80153]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec 08 09:48:03 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'telegraf'
Dec 08 09:48:03 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:48:03.417+0000 7f97f84f4140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec 08 09:48:03 compute-1 ceph-mgr[80153]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 08 09:48:03 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'telemetry'
Dec 08 09:48:03 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:48:03.490+0000 7f97f84f4140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 08 09:48:03 compute-1 ceph-mon[79846]: 5.0 scrub starts
Dec 08 09:48:03 compute-1 ceph-mon[79846]: 5.0 scrub ok
Dec 08 09:48:03 compute-1 ceph-mon[79846]: 3.12 scrub starts
Dec 08 09:48:03 compute-1 ceph-mon[79846]: 3.12 scrub ok
Dec 08 09:48:03 compute-1 ceph-mon[79846]: 3.c scrub starts
Dec 08 09:48:03 compute-1 ceph-mon[79846]: 3.c scrub ok
Dec 08 09:48:03 compute-1 ceph-mgr[80153]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 08 09:48:03 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'test_orchestrator'
Dec 08 09:48:03 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:48:03.653+0000 7f97f84f4140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 08 09:48:03 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Dec 08 09:48:03 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Dec 08 09:48:03 compute-1 ceph-mgr[80153]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 08 09:48:03 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'volumes'
Dec 08 09:48:03 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:48:03.884+0000 7f97f84f4140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 08 09:48:04 compute-1 ceph-mgr[80153]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 08 09:48:04 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'zabbix'
Dec 08 09:48:04 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:48:04.161+0000 7f97f84f4140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 08 09:48:04 compute-1 ceph-mgr[80153]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 08 09:48:04 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:48:04.227+0000 7f97f84f4140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 08 09:48:04 compute-1 ceph-mgr[80153]: ms_deliver_dispatch: unhandled message 0x5651ab623860 mon_map magic: 0 from mon.2 v2:192.168.122.101:3300/0
Dec 08 09:48:04 compute-1 ceph-mgr[80153]: mgr handle_mgr_map respawning because set of enabled modules changed!
Dec 08 09:48:04 compute-1 ceph-mgr[80153]: mgr respawn  e: '/usr/bin/ceph-mgr'
Dec 08 09:48:04 compute-1 ceph-mgr[80153]: mgr respawn  0: '/usr/bin/ceph-mgr'
Dec 08 09:48:04 compute-1 ceph-mgr[80153]: mgr respawn  1: '-n'
Dec 08 09:48:04 compute-1 ceph-mgr[80153]: mgr respawn  2: 'mgr.compute-1.mmkaif'
Dec 08 09:48:04 compute-1 ceph-mgr[80153]: mgr respawn  3: '-f'
Dec 08 09:48:04 compute-1 ceph-mgr[80153]: mgr respawn  4: '--setuser'
Dec 08 09:48:04 compute-1 ceph-mgr[80153]: mgr respawn  5: 'ceph'
Dec 08 09:48:04 compute-1 ceph-mgr[80153]: mgr respawn  6: '--setgroup'
Dec 08 09:48:04 compute-1 ceph-mgr[80153]: mgr respawn  7: 'ceph'
Dec 08 09:48:04 compute-1 ceph-mgr[80153]: mgr respawn  8: '--default-log-to-file=false'
Dec 08 09:48:04 compute-1 ceph-mgr[80153]: mgr respawn  9: '--default-log-to-journald=true'
Dec 08 09:48:04 compute-1 ceph-mgr[80153]: mgr respawn  10: '--default-log-to-stderr=false'
Dec 08 09:48:04 compute-1 ceph-mgr[80153]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Dec 08 09:48:04 compute-1 ceph-mgr[80153]: mgr respawn  exe_path /proc/self/exe
Dec 08 09:48:04 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e44 e44: 3 total, 3 up, 3 in
Dec 08 09:48:04 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: ignoring --setuser ceph since I am not root
Dec 08 09:48:04 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: ignoring --setgroup ceph since I am not root
Dec 08 09:48:04 compute-1 ceph-mgr[80153]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Dec 08 09:48:04 compute-1 ceph-mgr[80153]: pidfile_write: ignore empty --pid-file
Dec 08 09:48:04 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'alerts'
Dec 08 09:48:04 compute-1 ceph-mgr[80153]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 08 09:48:04 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:48:04.468+0000 7fee4086a140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 08 09:48:04 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'balancer'
Dec 08 09:48:04 compute-1 ceph-mgr[80153]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 08 09:48:04 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'cephadm'
Dec 08 09:48:04 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:48:04.549+0000 7fee4086a140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 08 09:48:04 compute-1 ceph-mon[79846]: 4.6 scrub starts
Dec 08 09:48:04 compute-1 ceph-mon[79846]: 4.6 scrub ok
Dec 08 09:48:04 compute-1 ceph-mon[79846]: 5.14 scrub starts
Dec 08 09:48:04 compute-1 ceph-mon[79846]: 5.14 scrub ok
Dec 08 09:48:04 compute-1 ceph-mon[79846]: 6.8 scrub starts
Dec 08 09:48:04 compute-1 ceph-mon[79846]: 6.8 scrub ok
Dec 08 09:48:04 compute-1 ceph-mon[79846]: Standby manager daemon compute-1.mmkaif restarted
Dec 08 09:48:04 compute-1 ceph-mon[79846]: Standby manager daemon compute-1.mmkaif started
Dec 08 09:48:04 compute-1 ceph-mon[79846]: Active manager daemon compute-0.kitiwu restarted
Dec 08 09:48:04 compute-1 ceph-mon[79846]: Activating manager daemon compute-0.kitiwu
Dec 08 09:48:04 compute-1 ceph-mon[79846]: osdmap e44: 3 total, 3 up, 3 in
Dec 08 09:48:04 compute-1 ceph-mon[79846]: mgrmap e20: compute-0.kitiwu(active, starting, since 0.0321847s), standbys: compute-2.zqytsv, compute-1.mmkaif
Dec 08 09:48:04 compute-1 ceph-mon[79846]: Standby manager daemon compute-2.zqytsv restarted
Dec 08 09:48:04 compute-1 ceph-mon[79846]: Standby manager daemon compute-2.zqytsv started
Dec 08 09:48:04 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 6.a scrub starts
Dec 08 09:48:04 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 08 09:48:04 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 6.a scrub ok
Dec 08 09:48:05 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'crash'
Dec 08 09:48:05 compute-1 ceph-mgr[80153]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 08 09:48:05 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'dashboard'
Dec 08 09:48:05 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:48:05.317+0000 7fee4086a140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 08 09:48:05 compute-1 ceph-mon[79846]: 4.3 scrub starts
Dec 08 09:48:05 compute-1 ceph-mon[79846]: 4.3 scrub ok
Dec 08 09:48:05 compute-1 ceph-mon[79846]: 6.16 scrub starts
Dec 08 09:48:05 compute-1 ceph-mon[79846]: 6.16 scrub ok
Dec 08 09:48:05 compute-1 ceph-mon[79846]: 6.a scrub starts
Dec 08 09:48:05 compute-1 ceph-mon[79846]: 6.a scrub ok
Dec 08 09:48:05 compute-1 ceph-mon[79846]: mgrmap e21: compute-0.kitiwu(active, starting, since 1.12235s), standbys: compute-1.mmkaif, compute-2.zqytsv
Dec 08 09:48:05 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 3.a deep-scrub starts
Dec 08 09:48:05 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'devicehealth'
Dec 08 09:48:05 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 3.a deep-scrub ok
Dec 08 09:48:05 compute-1 ceph-mgr[80153]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 08 09:48:05 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'diskprediction_local'
Dec 08 09:48:05 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:48:05.934+0000 7fee4086a140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 08 09:48:06 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 08 09:48:06 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 08 09:48:06 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]:   from numpy import show_config as show_numpy_config
Dec 08 09:48:06 compute-1 ceph-mgr[80153]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 08 09:48:06 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:48:06.097+0000 7fee4086a140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 08 09:48:06 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'influx'
Dec 08 09:48:06 compute-1 ceph-mgr[80153]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 08 09:48:06 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'insights'
Dec 08 09:48:06 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:48:06.170+0000 7fee4086a140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 08 09:48:06 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'iostat'
Dec 08 09:48:06 compute-1 ceph-mgr[80153]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 08 09:48:06 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'k8sevents'
Dec 08 09:48:06 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:48:06.299+0000 7fee4086a140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 08 09:48:06 compute-1 ceph-mon[79846]: 3.1b scrub starts
Dec 08 09:48:06 compute-1 ceph-mon[79846]: 3.1b scrub ok
Dec 08 09:48:06 compute-1 ceph-mon[79846]: 6.11 scrub starts
Dec 08 09:48:06 compute-1 ceph-mon[79846]: 6.11 scrub ok
Dec 08 09:48:06 compute-1 ceph-mon[79846]: 3.a deep-scrub starts
Dec 08 09:48:06 compute-1 ceph-mon[79846]: 3.a deep-scrub ok
Dec 08 09:48:06 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'localpool'
Dec 08 09:48:06 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'mds_autoscaler'
Dec 08 09:48:06 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 4.a scrub starts
Dec 08 09:48:06 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 4.a scrub ok
Dec 08 09:48:06 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'mirroring'
Dec 08 09:48:07 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'nfs'
Dec 08 09:48:07 compute-1 ceph-mgr[80153]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 08 09:48:07 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'orchestrator'
Dec 08 09:48:07 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:48:07.308+0000 7fee4086a140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 08 09:48:07 compute-1 ceph-mgr[80153]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 08 09:48:07 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'osd_perf_query'
Dec 08 09:48:07 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:48:07.555+0000 7fee4086a140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 08 09:48:07 compute-1 ceph-mon[79846]: 4.1d scrub starts
Dec 08 09:48:07 compute-1 ceph-mon[79846]: 4.1d scrub ok
Dec 08 09:48:07 compute-1 ceph-mon[79846]: 6.10 scrub starts
Dec 08 09:48:07 compute-1 ceph-mon[79846]: 6.10 scrub ok
Dec 08 09:48:07 compute-1 ceph-mon[79846]: 4.a scrub starts
Dec 08 09:48:07 compute-1 ceph-mon[79846]: 4.a scrub ok
Dec 08 09:48:07 compute-1 ceph-mgr[80153]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 08 09:48:07 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:48:07.630+0000 7fee4086a140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 08 09:48:07 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'osd_support'
Dec 08 09:48:07 compute-1 ceph-mgr[80153]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 08 09:48:07 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'pg_autoscaler'
Dec 08 09:48:07 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:48:07.697+0000 7fee4086a140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 08 09:48:07 compute-1 ceph-mgr[80153]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 08 09:48:07 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'progress'
Dec 08 09:48:07 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:48:07.775+0000 7fee4086a140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 08 09:48:07 compute-1 ceph-mgr[80153]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 08 09:48:07 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'prometheus'
Dec 08 09:48:07 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:48:07.849+0000 7fee4086a140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 08 09:48:07 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 3.d scrub starts
Dec 08 09:48:07 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 3.d scrub ok
Dec 08 09:48:08 compute-1 systemd[1]: Stopping User Manager for UID 42477...
Dec 08 09:48:08 compute-1 systemd[72606]: Activating special unit Exit the Session...
Dec 08 09:48:08 compute-1 systemd[72606]: Stopped target Main User Target.
Dec 08 09:48:08 compute-1 systemd[72606]: Stopped target Basic System.
Dec 08 09:48:08 compute-1 systemd[72606]: Stopped target Paths.
Dec 08 09:48:08 compute-1 systemd[72606]: Stopped target Sockets.
Dec 08 09:48:08 compute-1 systemd[72606]: Stopped target Timers.
Dec 08 09:48:08 compute-1 systemd[72606]: Stopped Mark boot as successful after the user session has run 2 minutes.
Dec 08 09:48:08 compute-1 systemd[72606]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 08 09:48:08 compute-1 systemd[72606]: Closed D-Bus User Message Bus Socket.
Dec 08 09:48:08 compute-1 systemd[72606]: Stopped Create User's Volatile Files and Directories.
Dec 08 09:48:08 compute-1 systemd[72606]: Removed slice User Application Slice.
Dec 08 09:48:08 compute-1 systemd[72606]: Reached target Shutdown.
Dec 08 09:48:08 compute-1 systemd[72606]: Finished Exit the Session.
Dec 08 09:48:08 compute-1 systemd[72606]: Reached target Exit the Session.
Dec 08 09:48:08 compute-1 systemd[1]: user@42477.service: Deactivated successfully.
Dec 08 09:48:08 compute-1 systemd[1]: Stopped User Manager for UID 42477.
Dec 08 09:48:08 compute-1 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Dec 08 09:48:08 compute-1 systemd[1]: run-user-42477.mount: Deactivated successfully.
Dec 08 09:48:08 compute-1 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Dec 08 09:48:08 compute-1 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Dec 08 09:48:08 compute-1 systemd[1]: Removed slice User Slice of UID 42477.
Dec 08 09:48:08 compute-1 systemd[1]: user-42477.slice: Consumed 1min 12.475s CPU time.
Dec 08 09:48:08 compute-1 ceph-mgr[80153]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 08 09:48:08 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'rbd_support'
Dec 08 09:48:08 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:48:08.209+0000 7fee4086a140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 08 09:48:08 compute-1 ceph-mgr[80153]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 08 09:48:08 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'restful'
Dec 08 09:48:08 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:48:08.305+0000 7fee4086a140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 08 09:48:08 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'rgw'
Dec 08 09:48:08 compute-1 ceph-mon[79846]: 4.1c scrub starts
Dec 08 09:48:08 compute-1 ceph-mon[79846]: 4.1c scrub ok
Dec 08 09:48:08 compute-1 ceph-mon[79846]: 4.11 deep-scrub starts
Dec 08 09:48:08 compute-1 ceph-mon[79846]: 4.11 deep-scrub ok
Dec 08 09:48:08 compute-1 ceph-mon[79846]: 3.d scrub starts
Dec 08 09:48:08 compute-1 ceph-mon[79846]: 3.d scrub ok
Dec 08 09:48:08 compute-1 ceph-mgr[80153]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 08 09:48:08 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:48:08.726+0000 7fee4086a140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 08 09:48:08 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'rook'
Dec 08 09:48:08 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Dec 08 09:48:08 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Dec 08 09:48:09 compute-1 ceph-mgr[80153]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 08 09:48:09 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'selftest'
Dec 08 09:48:09 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:48:09.291+0000 7fee4086a140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 08 09:48:09 compute-1 ceph-mgr[80153]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 08 09:48:09 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'snap_schedule'
Dec 08 09:48:09 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:48:09.361+0000 7fee4086a140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 08 09:48:09 compute-1 ceph-mgr[80153]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 08 09:48:09 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'stats'
Dec 08 09:48:09 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:48:09.439+0000 7fee4086a140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 08 09:48:09 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'status'
Dec 08 09:48:09 compute-1 ceph-mon[79846]: 6.1b deep-scrub starts
Dec 08 09:48:09 compute-1 ceph-mon[79846]: 6.1b deep-scrub ok
Dec 08 09:48:09 compute-1 ceph-mon[79846]: 6.13 scrub starts
Dec 08 09:48:09 compute-1 ceph-mon[79846]: 6.13 scrub ok
Dec 08 09:48:09 compute-1 ceph-mon[79846]: 6.7 scrub starts
Dec 08 09:48:09 compute-1 ceph-mon[79846]: 6.7 scrub ok
Dec 08 09:48:09 compute-1 ceph-mgr[80153]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec 08 09:48:09 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'telegraf'
Dec 08 09:48:09 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:48:09.587+0000 7fee4086a140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec 08 09:48:09 compute-1 ceph-mgr[80153]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 08 09:48:09 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:48:09.661+0000 7fee4086a140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 08 09:48:09 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'telemetry'
Dec 08 09:48:09 compute-1 ceph-mgr[80153]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 08 09:48:09 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:48:09.815+0000 7fee4086a140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 08 09:48:09 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'test_orchestrator'
Dec 08 09:48:09 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 08 09:48:09 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Dec 08 09:48:09 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Dec 08 09:48:10 compute-1 ceph-mgr[80153]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 08 09:48:10 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'volumes'
Dec 08 09:48:10 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:48:10.043+0000 7fee4086a140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 08 09:48:10 compute-1 ceph-mgr[80153]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 08 09:48:10 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'zabbix'
Dec 08 09:48:10 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:48:10.303+0000 7fee4086a140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 08 09:48:10 compute-1 ceph-mgr[80153]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 08 09:48:10 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:48:10.371+0000 7fee4086a140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 08 09:48:10 compute-1 ceph-mgr[80153]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 08 09:48:10 compute-1 ceph-mgr[80153]: mgr load Constructed class from module: dashboard
Dec 08 09:48:10 compute-1 ceph-mgr[80153]: ms_deliver_dispatch: unhandled message 0x557f3b7ff860 mon_map magic: 0 from mon.2 v2:192.168.122.101:3300/0
Dec 08 09:48:10 compute-1 ceph-mgr[80153]: [dashboard INFO root] server: ssl=no host=192.168.122.101 port=8443
Dec 08 09:48:10 compute-1 ceph-mgr[80153]: [dashboard INFO root] Configured CherryPy, starting engine...
Dec 08 09:48:10 compute-1 ceph-mgr[80153]: [dashboard INFO root] Starting engine...
Dec 08 09:48:10 compute-1 ceph-mgr[80153]: [dashboard INFO root] Engine started...
Dec 08 09:48:10 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e45 e45: 3 total, 3 up, 3 in
Dec 08 09:48:10 compute-1 ceph-mon[79846]: 3.0 scrub starts
Dec 08 09:48:10 compute-1 ceph-mon[79846]: 3.0 scrub ok
Dec 08 09:48:10 compute-1 ceph-mon[79846]: 4.10 deep-scrub starts
Dec 08 09:48:10 compute-1 ceph-mon[79846]: 4.10 deep-scrub ok
Dec 08 09:48:10 compute-1 ceph-mon[79846]: 4.5 scrub starts
Dec 08 09:48:10 compute-1 ceph-mon[79846]: 4.5 scrub ok
Dec 08 09:48:10 compute-1 ceph-mon[79846]: Standby manager daemon compute-1.mmkaif restarted
Dec 08 09:48:10 compute-1 ceph-mon[79846]: Standby manager daemon compute-1.mmkaif started
Dec 08 09:48:10 compute-1 ceph-mon[79846]: Active manager daemon compute-0.kitiwu restarted
Dec 08 09:48:10 compute-1 ceph-mon[79846]: Activating manager daemon compute-0.kitiwu
Dec 08 09:48:10 compute-1 ceph-mon[79846]: osdmap e45: 3 total, 3 up, 3 in
Dec 08 09:48:10 compute-1 ceph-mon[79846]: mgrmap e22: compute-0.kitiwu(active, starting, since 0.0355604s), standbys: compute-1.mmkaif, compute-2.zqytsv
Dec 08 09:48:10 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 08 09:48:10 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 08 09:48:10 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 08 09:48:10 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mgr metadata", "who": "compute-0.kitiwu", "id": "compute-0.kitiwu"}]: dispatch
Dec 08 09:48:10 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mgr metadata", "who": "compute-1.mmkaif", "id": "compute-1.mmkaif"}]: dispatch
Dec 08 09:48:10 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mgr metadata", "who": "compute-2.zqytsv", "id": "compute-2.zqytsv"}]: dispatch
Dec 08 09:48:10 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 08 09:48:10 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 08 09:48:10 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 08 09:48:10 compute-1 ceph-mon[79846]: Standby manager daemon compute-2.zqytsv restarted
Dec 08 09:48:10 compute-1 ceph-mon[79846]: Standby manager daemon compute-2.zqytsv started
Dec 08 09:48:10 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec 08 09:48:10 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 08 09:48:10 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec 08 09:48:10 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Dec 08 09:48:10 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Dec 08 09:48:11 compute-1 sshd-session[83262]: Accepted publickey for ceph-admin from 192.168.122.100 port 54590 ssh2: RSA SHA256:8l9QqAG/R36mxhB8hMA3YKcV0YDWbaRrGefHfTlg3OU
Dec 08 09:48:11 compute-1 systemd-logind[795]: New session 34 of user ceph-admin.
Dec 08 09:48:11 compute-1 systemd[1]: Created slice User Slice of UID 42477.
Dec 08 09:48:11 compute-1 systemd[1]: Starting User Runtime Directory /run/user/42477...
Dec 08 09:48:11 compute-1 systemd[1]: Finished User Runtime Directory /run/user/42477.
Dec 08 09:48:11 compute-1 systemd[1]: Starting User Manager for UID 42477...
Dec 08 09:48:11 compute-1 systemd[83266]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 08 09:48:11 compute-1 systemd[83266]: Queued start job for default target Main User Target.
Dec 08 09:48:11 compute-1 systemd[83266]: Created slice User Application Slice.
Dec 08 09:48:11 compute-1 systemd[83266]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 08 09:48:11 compute-1 systemd[83266]: Started Daily Cleanup of User's Temporary Directories.
Dec 08 09:48:11 compute-1 systemd[83266]: Reached target Paths.
Dec 08 09:48:11 compute-1 systemd[83266]: Reached target Timers.
Dec 08 09:48:11 compute-1 systemd[83266]: Starting D-Bus User Message Bus Socket...
Dec 08 09:48:11 compute-1 systemd[83266]: Starting Create User's Volatile Files and Directories...
Dec 08 09:48:11 compute-1 systemd[83266]: Listening on D-Bus User Message Bus Socket.
Dec 08 09:48:11 compute-1 systemd[83266]: Reached target Sockets.
Dec 08 09:48:11 compute-1 systemd[83266]: Finished Create User's Volatile Files and Directories.
Dec 08 09:48:11 compute-1 systemd[83266]: Reached target Basic System.
Dec 08 09:48:11 compute-1 systemd[83266]: Reached target Main User Target.
Dec 08 09:48:11 compute-1 systemd[83266]: Startup finished in 153ms.
Dec 08 09:48:11 compute-1 systemd[1]: Started User Manager for UID 42477.
Dec 08 09:48:11 compute-1 systemd[1]: Started Session 34 of User ceph-admin.
Dec 08 09:48:11 compute-1 sshd-session[83262]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 08 09:48:11 compute-1 sudo[83282]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 08 09:48:11 compute-1 sudo[83282]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:48:11 compute-1 sudo[83282]: pam_unix(sudo:session): session closed for user root
Dec 08 09:48:11 compute-1 ceph-mon[79846]: 3.8 scrub starts
Dec 08 09:48:11 compute-1 ceph-mon[79846]: 3.8 scrub ok
Dec 08 09:48:11 compute-1 ceph-mon[79846]: Manager daemon compute-0.kitiwu is now available
Dec 08 09:48:11 compute-1 ceph-mon[79846]: 5.1e scrub starts
Dec 08 09:48:11 compute-1 ceph-mon[79846]: 5.1e scrub ok
Dec 08 09:48:11 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.kitiwu/mirror_snapshot_schedule"}]: dispatch
Dec 08 09:48:11 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.kitiwu/trash_purge_schedule"}]: dispatch
Dec 08 09:48:11 compute-1 ceph-mon[79846]: 5.2 scrub starts
Dec 08 09:48:11 compute-1 ceph-mon[79846]: 5.2 scrub ok
Dec 08 09:48:11 compute-1 ceph-mon[79846]: mgrmap e23: compute-0.kitiwu(active, since 1.07357s), standbys: compute-1.mmkaif, compute-2.zqytsv
Dec 08 09:48:11 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).mds e2 new map
Dec 08 09:48:11 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).mds e2 print_map
                                           e2
                                           btime 2025-12-08T09:48:11:623626+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-08T09:48:11.623571+0000
                                           modified        2025-12-08T09:48:11.623571+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                            
                                            
Dec 08 09:48:11 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e46 e46: 3 total, 3 up, 3 in
Dec 08 09:48:11 compute-1 sudo[83307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Dec 08 09:48:11 compute-1 sudo[83307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:48:11 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Dec 08 09:48:11 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Dec 08 09:48:12 compute-1 podman[83402]: 2025-12-08 09:48:12.268034702 +0000 UTC m=+0.064441812 container exec 0b1ceffabe238cda53122741c28a5b5e679f0efe3fbc9966719b6787d3af3102 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-crash-compute-1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec 08 09:48:12 compute-1 podman[83402]: 2025-12-08 09:48:12.363355719 +0000 UTC m=+0.159762829 container exec_died 0b1ceffabe238cda53122741c28a5b5e679f0efe3fbc9966719b6787d3af3102 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-crash-compute-1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 08 09:48:12 compute-1 ceph-mon[79846]: 2.a scrub starts
Dec 08 09:48:12 compute-1 ceph-mon[79846]: 2.a scrub ok
Dec 08 09:48:12 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Dec 08 09:48:12 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Dec 08 09:48:12 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Dec 08 09:48:12 compute-1 ceph-mon[79846]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec 08 09:48:12 compute-1 ceph-mon[79846]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Dec 08 09:48:12 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Dec 08 09:48:12 compute-1 ceph-mon[79846]: osdmap e46: 3 total, 3 up, 3 in
Dec 08 09:48:12 compute-1 ceph-mon[79846]: fsmap cephfs:0
Dec 08 09:48:12 compute-1 ceph-mon[79846]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Dec 08 09:48:12 compute-1 ceph-mon[79846]: 3.19 scrub starts
Dec 08 09:48:12 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:48:12 compute-1 ceph-mon[79846]: 3.19 scrub ok
Dec 08 09:48:12 compute-1 ceph-mon[79846]: 5.7 scrub starts
Dec 08 09:48:12 compute-1 ceph-mon[79846]: 5.7 scrub ok
Dec 08 09:48:12 compute-1 ceph-mon[79846]: [08/Dec/2025:09:48:12] ENGINE Bus STARTING
Dec 08 09:48:12 compute-1 ceph-mon[79846]: [08/Dec/2025:09:48:12] ENGINE Serving on http://192.168.122.100:8765
Dec 08 09:48:12 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:48:12 compute-1 sudo[83307]: pam_unix(sudo:session): session closed for user root
Dec 08 09:48:12 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 4.d scrub starts
Dec 08 09:48:12 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 4.d scrub ok
Dec 08 09:48:12 compute-1 sudo[83510]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 08 09:48:12 compute-1 sudo[83510]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:48:12 compute-1 sudo[83510]: pam_unix(sudo:session): session closed for user root
Dec 08 09:48:13 compute-1 sudo[83535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 08 09:48:13 compute-1 sudo[83535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:48:13 compute-1 sudo[83535]: pam_unix(sudo:session): session closed for user root
Dec 08 09:48:13 compute-1 ceph-mon[79846]: 2.d scrub starts
Dec 08 09:48:13 compute-1 ceph-mon[79846]: 2.d scrub ok
Dec 08 09:48:13 compute-1 ceph-mon[79846]: [08/Dec/2025:09:48:12] ENGINE Serving on https://192.168.122.100:7150
Dec 08 09:48:13 compute-1 ceph-mon[79846]: [08/Dec/2025:09:48:12] ENGINE Bus STARTED
Dec 08 09:48:13 compute-1 ceph-mon[79846]: [08/Dec/2025:09:48:12] ENGINE Client ('192.168.122.100', 54030) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 08 09:48:13 compute-1 ceph-mon[79846]: pgmap v5: 197 pgs: 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec 08 09:48:13 compute-1 ceph-mon[79846]: from='client.14502 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 08 09:48:13 compute-1 ceph-mon[79846]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Dec 08 09:48:13 compute-1 ceph-mon[79846]: 2.6 scrub starts
Dec 08 09:48:13 compute-1 ceph-mon[79846]: 2.6 scrub ok
Dec 08 09:48:13 compute-1 ceph-mon[79846]: mgrmap e24: compute-0.kitiwu(active, since 2s), standbys: compute-1.mmkaif, compute-2.zqytsv
Dec 08 09:48:13 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:48:13 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:48:13 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:48:13 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:48:13 compute-1 ceph-mon[79846]: 4.d scrub starts
Dec 08 09:48:13 compute-1 ceph-mon[79846]: 4.d scrub ok
Dec 08 09:48:13 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:48:13 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:48:13 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
Dec 08 09:48:13 compute-1 sudo[83591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 08 09:48:13 compute-1 sudo[83591]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:48:13 compute-1 sudo[83591]: pam_unix(sudo:session): session closed for user root
Dec 08 09:48:13 compute-1 sudo[83616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Dec 08 09:48:13 compute-1 sudo[83616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:48:13 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Dec 08 09:48:13 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Dec 08 09:48:13 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e47 e47: 3 total, 3 up, 3 in
Dec 08 09:48:13 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 47 pg[12.0( empty local-lis/les=0/0 n=0 ec=47/47 lis/c=0/0 les/c/f=0/0/0 sis=47) [0] r=0 lpr=47 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:48:14 compute-1 sudo[83616]: pam_unix(sudo:session): session closed for user root
Dec 08 09:48:14 compute-1 sudo[83658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 08 09:48:14 compute-1 sudo[83658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:48:14 compute-1 sudo[83658]: pam_unix(sudo:session): session closed for user root
Dec 08 09:48:14 compute-1 sudo[83683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/etc/ceph
Dec 08 09:48:14 compute-1 sudo[83683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:48:14 compute-1 sudo[83683]: pam_unix(sudo:session): session closed for user root
Dec 08 09:48:14 compute-1 sudo[83708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/etc/ceph/ceph.conf.new
Dec 08 09:48:14 compute-1 sudo[83708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:48:14 compute-1 sudo[83708]: pam_unix(sudo:session): session closed for user root
Dec 08 09:48:14 compute-1 sudo[83733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4
Dec 08 09:48:14 compute-1 sudo[83733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:48:14 compute-1 sudo[83733]: pam_unix(sudo:session): session closed for user root
Dec 08 09:48:14 compute-1 sudo[83758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/etc/ceph/ceph.conf.new
Dec 08 09:48:14 compute-1 sudo[83758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:48:14 compute-1 sudo[83758]: pam_unix(sudo:session): session closed for user root
Dec 08 09:48:14 compute-1 sudo[83806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/etc/ceph/ceph.conf.new
Dec 08 09:48:14 compute-1 sudo[83806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:48:14 compute-1 sudo[83806]: pam_unix(sudo:session): session closed for user root
Dec 08 09:48:14 compute-1 ceph-mon[79846]: 5.b scrub starts
Dec 08 09:48:14 compute-1 ceph-mon[79846]: 5.b scrub ok
Dec 08 09:48:14 compute-1 ceph-mon[79846]: from='client.14514 -' entity='client.admin' cmd=[{"prefix": "nfs cluster create", "cluster_id": "cephfs", "ingress": true, "virtual_ip": "192.168.122.2/24", "ingress_mode": "haproxy-protocol", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Dec 08 09:48:14 compute-1 ceph-mon[79846]: 2.4 scrub starts
Dec 08 09:48:14 compute-1 ceph-mon[79846]: 2.4 scrub ok
Dec 08 09:48:14 compute-1 ceph-mon[79846]: 3.3 scrub starts
Dec 08 09:48:14 compute-1 ceph-mon[79846]: 3.3 scrub ok
Dec 08 09:48:14 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Dec 08 09:48:14 compute-1 ceph-mon[79846]: osdmap e47: 3 total, 3 up, 3 in
Dec 08 09:48:14 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
Dec 08 09:48:14 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:48:14 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:48:14 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 08 09:48:14 compute-1 ceph-mon[79846]: 5.8 scrub starts
Dec 08 09:48:14 compute-1 ceph-mon[79846]: 5.8 scrub ok
Dec 08 09:48:14 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:48:14 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:48:14 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:48:14 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 08 09:48:14 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:48:14 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 08 09:48:14 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 08 09:48:14 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 08 09:48:14 compute-1 ceph-mon[79846]: Updating compute-0:/etc/ceph/ceph.conf
Dec 08 09:48:14 compute-1 ceph-mon[79846]: Updating compute-1:/etc/ceph/ceph.conf
Dec 08 09:48:14 compute-1 ceph-mon[79846]: Updating compute-2:/etc/ceph/ceph.conf
Dec 08 09:48:14 compute-1 sudo[83831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/etc/ceph/ceph.conf.new
Dec 08 09:48:14 compute-1 sudo[83831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:48:14 compute-1 sudo[83831]: pam_unix(sudo:session): session closed for user root
Dec 08 09:48:14 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 08 09:48:14 compute-1 sudo[83856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 08 09:48:14 compute-1 sudo[83856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:48:14 compute-1 sudo[83856]: pam_unix(sudo:session): session closed for user root
Dec 08 09:48:14 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Dec 08 09:48:14 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Dec 08 09:48:14 compute-1 sudo[83881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config
Dec 08 09:48:14 compute-1 sudo[83881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:48:14 compute-1 sudo[83881]: pam_unix(sudo:session): session closed for user root
Dec 08 09:48:14 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e48 e48: 3 total, 3 up, 3 in
Dec 08 09:48:14 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 48 pg[12.0( empty local-lis/les=47/48 n=0 ec=47/47 lis/c=0/0 les/c/f=0/0/0 sis=47) [0] r=0 lpr=47 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:48:15 compute-1 sudo[83906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config
Dec 08 09:48:15 compute-1 sudo[83906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:48:15 compute-1 sudo[83906]: pam_unix(sudo:session): session closed for user root
Dec 08 09:48:15 compute-1 sudo[83931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf.new
Dec 08 09:48:15 compute-1 sudo[83931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:48:15 compute-1 sudo[83931]: pam_unix(sudo:session): session closed for user root
Dec 08 09:48:15 compute-1 sudo[83956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4
Dec 08 09:48:15 compute-1 sudo[83956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:48:15 compute-1 sudo[83956]: pam_unix(sudo:session): session closed for user root
Dec 08 09:48:15 compute-1 sudo[83981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf.new
Dec 08 09:48:15 compute-1 sudo[83981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:48:15 compute-1 sudo[83981]: pam_unix(sudo:session): session closed for user root
Dec 08 09:48:15 compute-1 sudo[84029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf.new
Dec 08 09:48:15 compute-1 sudo[84029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:48:15 compute-1 sudo[84029]: pam_unix(sudo:session): session closed for user root
Dec 08 09:48:15 compute-1 sudo[84054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf.new
Dec 08 09:48:15 compute-1 sudo[84054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:48:15 compute-1 sudo[84054]: pam_unix(sudo:session): session closed for user root
Dec 08 09:48:15 compute-1 sudo[84079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf.new /var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf
Dec 08 09:48:15 compute-1 sudo[84079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:48:15 compute-1 sudo[84079]: pam_unix(sudo:session): session closed for user root
Dec 08 09:48:15 compute-1 sudo[84104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 08 09:48:15 compute-1 sudo[84104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:48:15 compute-1 sudo[84104]: pam_unix(sudo:session): session closed for user root
Dec 08 09:48:15 compute-1 sudo[84129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/etc/ceph
Dec 08 09:48:15 compute-1 sudo[84129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:48:15 compute-1 sudo[84129]: pam_unix(sudo:session): session closed for user root
Dec 08 09:48:15 compute-1 sudo[84154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/etc/ceph/ceph.client.admin.keyring.new
Dec 08 09:48:15 compute-1 sudo[84154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:48:15 compute-1 sudo[84154]: pam_unix(sudo:session): session closed for user root
Dec 08 09:48:15 compute-1 sudo[84179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4
Dec 08 09:48:15 compute-1 sudo[84179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:48:15 compute-1 sudo[84179]: pam_unix(sudo:session): session closed for user root
Dec 08 09:48:15 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Dec 08 09:48:15 compute-1 sudo[84204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/etc/ceph/ceph.client.admin.keyring.new
Dec 08 09:48:15 compute-1 sudo[84204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:48:15 compute-1 sudo[84204]: pam_unix(sudo:session): session closed for user root
Dec 08 09:48:15 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Dec 08 09:48:15 compute-1 ceph-mon[79846]: pgmap v7: 198 pgs: 1 unknown, 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec 08 09:48:15 compute-1 ceph-mon[79846]: 2.e scrub starts
Dec 08 09:48:15 compute-1 ceph-mon[79846]: 2.e scrub ok
Dec 08 09:48:15 compute-1 ceph-mon[79846]: Updating compute-0:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf
Dec 08 09:48:15 compute-1 ceph-mon[79846]: Updating compute-2:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf
Dec 08 09:48:15 compute-1 ceph-mon[79846]: 5.1 scrub starts
Dec 08 09:48:15 compute-1 ceph-mon[79846]: Updating compute-1:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf
Dec 08 09:48:15 compute-1 ceph-mon[79846]: 5.1 scrub ok
Dec 08 09:48:15 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
Dec 08 09:48:15 compute-1 ceph-mon[79846]: osdmap e48: 3 total, 3 up, 3 in
Dec 08 09:48:15 compute-1 ceph-mon[79846]: Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Dec 08 09:48:15 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:48:15 compute-1 ceph-mon[79846]: Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Dec 08 09:48:15 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:48:15 compute-1 ceph-mon[79846]: 2.c scrub starts
Dec 08 09:48:15 compute-1 ceph-mon[79846]: 2.c scrub ok
Dec 08 09:48:15 compute-1 ceph-mon[79846]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 08 09:48:15 compute-1 ceph-mon[79846]: mgrmap e25: compute-0.kitiwu(active, since 4s), standbys: compute-1.mmkaif, compute-2.zqytsv
Dec 08 09:48:15 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e49 e49: 3 total, 3 up, 3 in
Dec 08 09:48:16 compute-1 sudo[84252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/etc/ceph/ceph.client.admin.keyring.new
Dec 08 09:48:16 compute-1 sudo[84252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:48:16 compute-1 sudo[84252]: pam_unix(sudo:session): session closed for user root
Dec 08 09:48:16 compute-1 sudo[84277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/etc/ceph/ceph.client.admin.keyring.new
Dec 08 09:48:16 compute-1 sudo[84277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:48:16 compute-1 sudo[84277]: pam_unix(sudo:session): session closed for user root
Dec 08 09:48:16 compute-1 sudo[84302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Dec 08 09:48:16 compute-1 sudo[84302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:48:16 compute-1 sudo[84302]: pam_unix(sudo:session): session closed for user root
Dec 08 09:48:16 compute-1 sudo[84327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config
Dec 08 09:48:16 compute-1 sudo[84327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:48:16 compute-1 sudo[84327]: pam_unix(sudo:session): session closed for user root
Dec 08 09:48:16 compute-1 sudo[84352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config
Dec 08 09:48:16 compute-1 sudo[84352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:48:16 compute-1 sudo[84352]: pam_unix(sudo:session): session closed for user root
Dec 08 09:48:16 compute-1 sudo[84377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.client.admin.keyring.new
Dec 08 09:48:16 compute-1 sudo[84377]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:48:16 compute-1 sudo[84377]: pam_unix(sudo:session): session closed for user root
Dec 08 09:48:16 compute-1 sudo[84402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4
Dec 08 09:48:16 compute-1 sudo[84402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:48:16 compute-1 sudo[84402]: pam_unix(sudo:session): session closed for user root
Dec 08 09:48:16 compute-1 sudo[84427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.client.admin.keyring.new
Dec 08 09:48:16 compute-1 sudo[84427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:48:16 compute-1 sudo[84427]: pam_unix(sudo:session): session closed for user root
Dec 08 09:48:16 compute-1 sudo[84475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.client.admin.keyring.new
Dec 08 09:48:16 compute-1 sudo[84475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:48:16 compute-1 sudo[84475]: pam_unix(sudo:session): session closed for user root
Dec 08 09:48:16 compute-1 sudo[84500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.client.admin.keyring.new
Dec 08 09:48:16 compute-1 sudo[84500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:48:16 compute-1 sudo[84500]: pam_unix(sudo:session): session closed for user root
Dec 08 09:48:16 compute-1 sudo[84525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.client.admin.keyring.new /var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.client.admin.keyring
Dec 08 09:48:16 compute-1 sudo[84525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:48:16 compute-1 sudo[84525]: pam_unix(sudo:session): session closed for user root
Dec 08 09:48:16 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 5.f scrub starts
Dec 08 09:48:16 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 5.f scrub ok
Dec 08 09:48:16 compute-1 sudo[84550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 08 09:48:16 compute-1 sudo[84550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:48:16 compute-1 sudo[84550]: pam_unix(sudo:session): session closed for user root
Dec 08 09:48:17 compute-1 ceph-mon[79846]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 08 09:48:17 compute-1 ceph-mon[79846]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec 08 09:48:17 compute-1 ceph-mon[79846]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec 08 09:48:17 compute-1 ceph-mon[79846]: 2.19 scrub starts
Dec 08 09:48:17 compute-1 ceph-mon[79846]: 2.19 scrub ok
Dec 08 09:48:17 compute-1 ceph-mon[79846]: 3.5 scrub starts
Dec 08 09:48:17 compute-1 ceph-mon[79846]: 3.5 scrub ok
Dec 08 09:48:17 compute-1 ceph-mon[79846]: osdmap e49: 3 total, 3 up, 3 in
Dec 08 09:48:17 compute-1 ceph-mon[79846]: 2.10 scrub starts
Dec 08 09:48:17 compute-1 ceph-mon[79846]: 2.10 scrub ok
Dec 08 09:48:17 compute-1 ceph-mon[79846]: Updating compute-0:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.client.admin.keyring
Dec 08 09:48:17 compute-1 ceph-mon[79846]: Updating compute-2:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.client.admin.keyring
Dec 08 09:48:17 compute-1 ceph-mon[79846]: Updating compute-1:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.client.admin.keyring
Dec 08 09:48:17 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:48:17 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:48:17 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:48:17 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:48:17 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:48:17 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:48:17 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:48:17 compute-1 sudo[84575]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/node-exporter:v1.7.0 --timeout 895 _orch deploy --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4
Dec 08 09:48:17 compute-1 sudo[84575]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:48:17 compute-1 systemd[1]: Reloading.
Dec 08 09:48:17 compute-1 systemd-rc-local-generator[84660]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 08 09:48:17 compute-1 systemd-sysv-generator[84667]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 08 09:48:17 compute-1 systemd[1]: Reloading.
Dec 08 09:48:17 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 4.e scrub starts
Dec 08 09:48:17 compute-1 systemd-rc-local-generator[84699]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 08 09:48:17 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 4.e scrub ok
Dec 08 09:48:17 compute-1 systemd-sysv-generator[84702]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 08 09:48:18 compute-1 ceph-mon[79846]: pgmap v10: 198 pgs: 1 unknown, 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec 08 09:48:18 compute-1 ceph-mon[79846]: 7.1b scrub starts
Dec 08 09:48:18 compute-1 ceph-mon[79846]: 7.1b scrub ok
Dec 08 09:48:18 compute-1 ceph-mon[79846]: 5.f scrub starts
Dec 08 09:48:18 compute-1 ceph-mon[79846]: 5.f scrub ok
Dec 08 09:48:18 compute-1 ceph-mon[79846]: Deploying daemon node-exporter.compute-1 on compute-1
Dec 08 09:48:18 compute-1 ceph-mon[79846]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec 08 09:48:18 compute-1 ceph-mon[79846]: 2.13 scrub starts
Dec 08 09:48:18 compute-1 ceph-mon[79846]: 2.13 scrub ok
Dec 08 09:48:18 compute-1 ceph-mon[79846]: mgrmap e26: compute-0.kitiwu(active, since 6s), standbys: compute-1.mmkaif, compute-2.zqytsv
Dec 08 09:48:18 compute-1 systemd[1]: Starting Ceph node-exporter.compute-1 for ceb838ef-9d5d-54e4-bddb-2f01adce2ad4...
Dec 08 09:48:18 compute-1 bash[84760]: Trying to pull quay.io/prometheus/node-exporter:v1.7.0...
Dec 08 09:48:18 compute-1 bash[84760]: Getting image source signatures
Dec 08 09:48:18 compute-1 bash[84760]: Copying blob sha256:2abcce694348cd2c949c0e98a7400ebdfd8341021bcf6b541bc72033ce982510
Dec 08 09:48:18 compute-1 bash[84760]: Copying blob sha256:324153f2810a9927fcce320af9e4e291e0b6e805cbdd1f338386c756b9defa24
Dec 08 09:48:18 compute-1 bash[84760]: Copying blob sha256:455fd88e5221bc1e278ef2d059cd70e4df99a24e5af050ede621534276f6cf9a
Dec 08 09:48:18 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 4.c scrub starts
Dec 08 09:48:18 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 4.c scrub ok
Dec 08 09:48:19 compute-1 ceph-mon[79846]: 7.18 scrub starts
Dec 08 09:48:19 compute-1 ceph-mon[79846]: 7.18 scrub ok
Dec 08 09:48:19 compute-1 ceph-mon[79846]: 4.e scrub starts
Dec 08 09:48:19 compute-1 ceph-mon[79846]: 4.e scrub ok
Dec 08 09:48:19 compute-1 ceph-mon[79846]: 4.14 scrub starts
Dec 08 09:48:19 compute-1 ceph-mon[79846]: 4.14 scrub ok
Dec 08 09:48:19 compute-1 bash[84760]: Copying config sha256:72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e
Dec 08 09:48:19 compute-1 bash[84760]: Writing manifest to image destination
Dec 08 09:48:19 compute-1 podman[84760]: 2025-12-08 09:48:19.380061981 +0000 UTC m=+1.105662586 container create 2fda3b355cd40eb61e8d8918a072c9229da4506f505876d6ee0a23fb8c342813 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 08 09:48:19 compute-1 podman[84760]: 2025-12-08 09:48:19.365164263 +0000 UTC m=+1.090764888 image pull 72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e quay.io/prometheus/node-exporter:v1.7.0
Dec 08 09:48:19 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d93b9e92dea35546c1255b5aeccc57e8f1efeb4e12cf372afa4f3c54a756248/merged/etc/node-exporter supports timestamps until 2038 (0x7fffffff)
Dec 08 09:48:19 compute-1 podman[84760]: 2025-12-08 09:48:19.449203226 +0000 UTC m=+1.174803851 container init 2fda3b355cd40eb61e8d8918a072c9229da4506f505876d6ee0a23fb8c342813 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 08 09:48:19 compute-1 podman[84760]: 2025-12-08 09:48:19.458596186 +0000 UTC m=+1.184196791 container start 2fda3b355cd40eb61e8d8918a072c9229da4506f505876d6ee0a23fb8c342813 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 08 09:48:19 compute-1 bash[84760]: 2fda3b355cd40eb61e8d8918a072c9229da4506f505876d6ee0a23fb8c342813
Dec 08 09:48:19 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-1[84837]: ts=2025-12-08T09:48:19.469Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)"
Dec 08 09:48:19 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-1[84837]: ts=2025-12-08T09:48:19.469Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)"
Dec 08 09:48:19 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-1[84837]: ts=2025-12-08T09:48:19.470Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Dec 08 09:48:19 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-1[84837]: ts=2025-12-08T09:48:19.471Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Dec 08 09:48:19 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-1[84837]: ts=2025-12-08T09:48:19.472Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Dec 08 09:48:19 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-1[84837]: ts=2025-12-08T09:48:19.472Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Dec 08 09:48:19 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-1[84837]: ts=2025-12-08T09:48:19.474Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Dec 08 09:48:19 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-1[84837]: ts=2025-12-08T09:48:19.474Z caller=node_exporter.go:117 level=info collector=arp
Dec 08 09:48:19 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-1[84837]: ts=2025-12-08T09:48:19.474Z caller=node_exporter.go:117 level=info collector=bcache
Dec 08 09:48:19 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-1[84837]: ts=2025-12-08T09:48:19.474Z caller=node_exporter.go:117 level=info collector=bonding
Dec 08 09:48:19 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-1[84837]: ts=2025-12-08T09:48:19.474Z caller=node_exporter.go:117 level=info collector=btrfs
Dec 08 09:48:19 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-1[84837]: ts=2025-12-08T09:48:19.474Z caller=node_exporter.go:117 level=info collector=conntrack
Dec 08 09:48:19 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-1[84837]: ts=2025-12-08T09:48:19.474Z caller=node_exporter.go:117 level=info collector=cpu
Dec 08 09:48:19 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-1[84837]: ts=2025-12-08T09:48:19.474Z caller=node_exporter.go:117 level=info collector=cpufreq
Dec 08 09:48:19 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-1[84837]: ts=2025-12-08T09:48:19.474Z caller=node_exporter.go:117 level=info collector=diskstats
Dec 08 09:48:19 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-1[84837]: ts=2025-12-08T09:48:19.474Z caller=node_exporter.go:117 level=info collector=dmi
Dec 08 09:48:19 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-1[84837]: ts=2025-12-08T09:48:19.474Z caller=node_exporter.go:117 level=info collector=edac
Dec 08 09:48:19 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-1[84837]: ts=2025-12-08T09:48:19.474Z caller=node_exporter.go:117 level=info collector=entropy
Dec 08 09:48:19 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-1[84837]: ts=2025-12-08T09:48:19.474Z caller=node_exporter.go:117 level=info collector=fibrechannel
Dec 08 09:48:19 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-1[84837]: ts=2025-12-08T09:48:19.474Z caller=node_exporter.go:117 level=info collector=filefd
Dec 08 09:48:19 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-1[84837]: ts=2025-12-08T09:48:19.474Z caller=node_exporter.go:117 level=info collector=filesystem
Dec 08 09:48:19 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-1[84837]: ts=2025-12-08T09:48:19.474Z caller=node_exporter.go:117 level=info collector=hwmon
Dec 08 09:48:19 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-1[84837]: ts=2025-12-08T09:48:19.474Z caller=node_exporter.go:117 level=info collector=infiniband
Dec 08 09:48:19 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-1[84837]: ts=2025-12-08T09:48:19.474Z caller=node_exporter.go:117 level=info collector=ipvs
Dec 08 09:48:19 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-1[84837]: ts=2025-12-08T09:48:19.474Z caller=node_exporter.go:117 level=info collector=loadavg
Dec 08 09:48:19 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-1[84837]: ts=2025-12-08T09:48:19.474Z caller=node_exporter.go:117 level=info collector=mdadm
Dec 08 09:48:19 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-1[84837]: ts=2025-12-08T09:48:19.474Z caller=node_exporter.go:117 level=info collector=meminfo
Dec 08 09:48:19 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-1[84837]: ts=2025-12-08T09:48:19.474Z caller=node_exporter.go:117 level=info collector=netclass
Dec 08 09:48:19 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-1[84837]: ts=2025-12-08T09:48:19.474Z caller=node_exporter.go:117 level=info collector=netdev
Dec 08 09:48:19 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-1[84837]: ts=2025-12-08T09:48:19.474Z caller=node_exporter.go:117 level=info collector=netstat
Dec 08 09:48:19 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-1[84837]: ts=2025-12-08T09:48:19.474Z caller=node_exporter.go:117 level=info collector=nfs
Dec 08 09:48:19 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-1[84837]: ts=2025-12-08T09:48:19.474Z caller=node_exporter.go:117 level=info collector=nfsd
Dec 08 09:48:19 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-1[84837]: ts=2025-12-08T09:48:19.474Z caller=node_exporter.go:117 level=info collector=nvme
Dec 08 09:48:19 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-1[84837]: ts=2025-12-08T09:48:19.474Z caller=node_exporter.go:117 level=info collector=os
Dec 08 09:48:19 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-1[84837]: ts=2025-12-08T09:48:19.474Z caller=node_exporter.go:117 level=info collector=powersupplyclass
Dec 08 09:48:19 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-1[84837]: ts=2025-12-08T09:48:19.474Z caller=node_exporter.go:117 level=info collector=pressure
Dec 08 09:48:19 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-1[84837]: ts=2025-12-08T09:48:19.474Z caller=node_exporter.go:117 level=info collector=rapl
Dec 08 09:48:19 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-1[84837]: ts=2025-12-08T09:48:19.474Z caller=node_exporter.go:117 level=info collector=schedstat
Dec 08 09:48:19 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-1[84837]: ts=2025-12-08T09:48:19.474Z caller=node_exporter.go:117 level=info collector=selinux
Dec 08 09:48:19 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-1[84837]: ts=2025-12-08T09:48:19.474Z caller=node_exporter.go:117 level=info collector=sockstat
Dec 08 09:48:19 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-1[84837]: ts=2025-12-08T09:48:19.474Z caller=node_exporter.go:117 level=info collector=softnet
Dec 08 09:48:19 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-1[84837]: ts=2025-12-08T09:48:19.474Z caller=node_exporter.go:117 level=info collector=stat
Dec 08 09:48:19 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-1[84837]: ts=2025-12-08T09:48:19.474Z caller=node_exporter.go:117 level=info collector=tapestats
Dec 08 09:48:19 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-1[84837]: ts=2025-12-08T09:48:19.474Z caller=node_exporter.go:117 level=info collector=textfile
Dec 08 09:48:19 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-1[84837]: ts=2025-12-08T09:48:19.474Z caller=node_exporter.go:117 level=info collector=thermal_zone
Dec 08 09:48:19 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-1[84837]: ts=2025-12-08T09:48:19.474Z caller=node_exporter.go:117 level=info collector=time
Dec 08 09:48:19 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-1[84837]: ts=2025-12-08T09:48:19.474Z caller=node_exporter.go:117 level=info collector=udp_queues
Dec 08 09:48:19 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-1[84837]: ts=2025-12-08T09:48:19.474Z caller=node_exporter.go:117 level=info collector=uname
Dec 08 09:48:19 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-1[84837]: ts=2025-12-08T09:48:19.474Z caller=node_exporter.go:117 level=info collector=vmstat
Dec 08 09:48:19 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-1[84837]: ts=2025-12-08T09:48:19.474Z caller=node_exporter.go:117 level=info collector=xfs
Dec 08 09:48:19 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-1[84837]: ts=2025-12-08T09:48:19.474Z caller=node_exporter.go:117 level=info collector=zfs
Dec 08 09:48:19 compute-1 systemd[1]: Started Ceph node-exporter.compute-1 for ceb838ef-9d5d-54e4-bddb-2f01adce2ad4.
Dec 08 09:48:19 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-1[84837]: ts=2025-12-08T09:48:19.476Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100
Dec 08 09:48:19 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-1[84837]: ts=2025-12-08T09:48:19.476Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100
Dec 08 09:48:19 compute-1 sudo[84575]: pam_unix(sudo:session): session closed for user root
Dec 08 09:48:19 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e49 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 08 09:48:19 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Dec 08 09:48:19 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Dec 08 09:48:20 compute-1 ceph-mon[79846]: pgmap v11: 198 pgs: 198 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 0 B/s wr, 14 op/s
Dec 08 09:48:20 compute-1 ceph-mon[79846]: 7.1e scrub starts
Dec 08 09:48:20 compute-1 ceph-mon[79846]: 7.1e scrub ok
Dec 08 09:48:20 compute-1 ceph-mon[79846]: 4.c scrub starts
Dec 08 09:48:20 compute-1 ceph-mon[79846]: 4.c scrub ok
Dec 08 09:48:20 compute-1 ceph-mon[79846]: 2.15 scrub starts
Dec 08 09:48:20 compute-1 ceph-mon[79846]: 2.15 scrub ok
Dec 08 09:48:20 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:48:20 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:48:20 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:48:20 compute-1 ceph-mon[79846]: from='client.? 192.168.122.100:0/3367952435' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Dec 08 09:48:20 compute-1 ceph-mon[79846]: from='client.? 192.168.122.100:0/3367952435' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Dec 08 09:48:20 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Dec 08 09:48:20 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Dec 08 09:48:21 compute-1 ceph-mon[79846]: Deploying daemon node-exporter.compute-2 on compute-2
Dec 08 09:48:21 compute-1 ceph-mon[79846]: 7.6 scrub starts
Dec 08 09:48:21 compute-1 ceph-mon[79846]: 7.6 scrub ok
Dec 08 09:48:21 compute-1 ceph-mon[79846]: 5.1c scrub starts
Dec 08 09:48:21 compute-1 ceph-mon[79846]: 5.1c scrub ok
Dec 08 09:48:21 compute-1 ceph-mon[79846]: 5.d scrub starts
Dec 08 09:48:21 compute-1 ceph-mon[79846]: 5.d scrub ok
Dec 08 09:48:21 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Dec 08 09:48:21 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Dec 08 09:48:22 compute-1 ceph-mon[79846]: pgmap v12: 198 pgs: 198 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 12 op/s
Dec 08 09:48:22 compute-1 ceph-mon[79846]: 7.2 scrub starts
Dec 08 09:48:22 compute-1 ceph-mon[79846]: 7.2 scrub ok
Dec 08 09:48:22 compute-1 ceph-mon[79846]: 5.1b scrub starts
Dec 08 09:48:22 compute-1 ceph-mon[79846]: 5.1b scrub ok
Dec 08 09:48:22 compute-1 ceph-mon[79846]: 5.12 scrub starts
Dec 08 09:48:22 compute-1 ceph-mon[79846]: 5.12 scrub ok
Dec 08 09:48:22 compute-1 ceph-mon[79846]: from='client.? 192.168.122.100:0/4175476984' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 08 09:48:22 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Dec 08 09:48:22 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Dec 08 09:48:23 compute-1 ceph-mon[79846]: 7.3 scrub starts
Dec 08 09:48:23 compute-1 ceph-mon[79846]: 7.3 scrub ok
Dec 08 09:48:23 compute-1 ceph-mon[79846]: 3.1c scrub starts
Dec 08 09:48:23 compute-1 ceph-mon[79846]: 3.1c scrub ok
Dec 08 09:48:23 compute-1 ceph-mon[79846]: 5.13 deep-scrub starts
Dec 08 09:48:23 compute-1 ceph-mon[79846]: 5.13 deep-scrub ok
Dec 08 09:48:23 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:48:23 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:48:23 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:48:23 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:48:23 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 08 09:48:23 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 08 09:48:23 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 08 09:48:23 compute-1 ceph-mon[79846]: from='client.? 192.168.122.100:0/1179970966' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 08 09:48:23 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Dec 08 09:48:23 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Dec 08 09:48:24 compute-1 ceph-mon[79846]: pgmap v13: 198 pgs: 198 active+clean; 454 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 255 B/s wr, 14 op/s
Dec 08 09:48:24 compute-1 ceph-mon[79846]: 7.4 deep-scrub starts
Dec 08 09:48:24 compute-1 ceph-mon[79846]: 7.4 deep-scrub ok
Dec 08 09:48:24 compute-1 ceph-mon[79846]: 4.1b scrub starts
Dec 08 09:48:24 compute-1 ceph-mon[79846]: 4.1b scrub ok
Dec 08 09:48:24 compute-1 ceph-mon[79846]: 4.19 scrub starts
Dec 08 09:48:24 compute-1 ceph-mon[79846]: 4.19 scrub ok
Dec 08 09:48:24 compute-1 ceph-mon[79846]: from='client.? 192.168.122.100:0/3067774280' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Dec 08 09:48:24 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e49 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 08 09:48:24 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Dec 08 09:48:24 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Dec 08 09:48:25 compute-1 ceph-mon[79846]: 7.e scrub starts
Dec 08 09:48:25 compute-1 ceph-mon[79846]: 7.e scrub ok
Dec 08 09:48:25 compute-1 ceph-mon[79846]: 4.18 scrub starts
Dec 08 09:48:25 compute-1 ceph-mon[79846]: 4.18 scrub ok
Dec 08 09:48:25 compute-1 ceph-mon[79846]: 6.1 deep-scrub starts
Dec 08 09:48:25 compute-1 ceph-mon[79846]: 6.1 deep-scrub ok
Dec 08 09:48:25 compute-1 ceph-mon[79846]: 5.18 scrub starts
Dec 08 09:48:25 compute-1 ceph-mon[79846]: 5.18 scrub ok
Dec 08 09:48:25 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Dec 08 09:48:25 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Dec 08 09:48:26 compute-1 ceph-mon[79846]: pgmap v14: 198 pgs: 198 active+clean; 454 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 212 B/s wr, 12 op/s
Dec 08 09:48:26 compute-1 ceph-mon[79846]: 7.f scrub starts
Dec 08 09:48:26 compute-1 ceph-mon[79846]: 7.f scrub ok
Dec 08 09:48:26 compute-1 ceph-mon[79846]: 7.a scrub starts
Dec 08 09:48:26 compute-1 ceph-mon[79846]: 7.a scrub ok
Dec 08 09:48:26 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:48:26 compute-1 ceph-mon[79846]: 4.1a scrub starts
Dec 08 09:48:26 compute-1 ceph-mon[79846]: 4.1a scrub ok
Dec 08 09:48:26 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 6.5 deep-scrub starts
Dec 08 09:48:26 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 6.5 deep-scrub ok
Dec 08 09:48:27 compute-1 ceph-mon[79846]: 7.8 scrub starts
Dec 08 09:48:27 compute-1 ceph-mon[79846]: 7.8 scrub ok
Dec 08 09:48:27 compute-1 ceph-mon[79846]: 7.14 scrub starts
Dec 08 09:48:27 compute-1 ceph-mon[79846]: 7.14 scrub ok
Dec 08 09:48:27 compute-1 ceph-mon[79846]: from='client.14604 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 08 09:48:27 compute-1 ceph-mon[79846]: 6.5 deep-scrub starts
Dec 08 09:48:27 compute-1 ceph-mon[79846]: 6.5 deep-scrub ok
Dec 08 09:48:27 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:48:27 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:48:27 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:48:27 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:48:27 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.hhmzvb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec 08 09:48:27 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.hhmzvb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec 08 09:48:27 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 08 09:48:27 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Dec 08 09:48:27 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Dec 08 09:48:28 compute-1 ceph-mon[79846]: pgmap v15: 198 pgs: 198 active+clean; 454 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 193 B/s wr, 11 op/s
Dec 08 09:48:28 compute-1 ceph-mon[79846]: 7.9 scrub starts
Dec 08 09:48:28 compute-1 ceph-mon[79846]: 7.9 scrub ok
Dec 08 09:48:28 compute-1 ceph-mon[79846]: 7.1d deep-scrub starts
Dec 08 09:48:28 compute-1 ceph-mon[79846]: 7.1d deep-scrub ok
Dec 08 09:48:28 compute-1 ceph-mon[79846]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Dec 08 09:48:28 compute-1 ceph-mon[79846]: Deploying daemon mds.cephfs.compute-2.hhmzvb on compute-2
Dec 08 09:48:28 compute-1 ceph-mon[79846]: 6.3 scrub starts
Dec 08 09:48:28 compute-1 ceph-mon[79846]: 6.3 scrub ok
Dec 08 09:48:28 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Dec 08 09:48:28 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Dec 08 09:48:29 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).mds e3 new map
Dec 08 09:48:29 compute-1 ceph-mon[79846]: 7.b scrub starts
Dec 08 09:48:29 compute-1 ceph-mon[79846]: 7.b scrub ok
Dec 08 09:48:29 compute-1 ceph-mon[79846]: from='client.14610 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 08 09:48:29 compute-1 ceph-mon[79846]: 6.2 scrub starts
Dec 08 09:48:29 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:48:29 compute-1 ceph-mon[79846]: 6.2 scrub ok
Dec 08 09:48:29 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:48:29 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:48:29 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.ywanut", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec 08 09:48:29 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).mds e3 print_map
                                           e3
                                           btime 2025-12-08T09:48:29.301156+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-08T09:48:11.623571+0000
                                           modified        2025-12-08T09:48:11.623571+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-2.hhmzvb{-1:24232} state up:standby seq 1 addr [v2:192.168.122.102:6804/1007969270,v1:192.168.122.102:6805/1007969270] compat {c=[1],r=[1],i=[1fff]}]
Dec 08 09:48:29 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.ywanut", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec 08 09:48:29 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 08 09:48:29 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).mds e4 new map
Dec 08 09:48:29 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).mds e4 print_map
                                           e4
                                           btime 2025-12-08T09:48:29.331502+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-08T09:48:11.623571+0000
                                           modified        2025-12-08T09:48:29.331497+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24232}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                           [mds.cephfs.compute-2.hhmzvb{0:24232} state up:creating seq 1 addr [v2:192.168.122.102:6804/1007969270,v1:192.168.122.102:6805/1007969270] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
Dec 08 09:48:29 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e49 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 08 09:48:29 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 6.e scrub starts
Dec 08 09:48:29 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 6.e scrub ok
Dec 08 09:48:30 compute-1 ceph-mon[79846]: pgmap v16: 198 pgs: 198 active+clean; 454 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 170 B/s wr, 9 op/s
Dec 08 09:48:30 compute-1 ceph-mon[79846]: 7.10 scrub starts
Dec 08 09:48:30 compute-1 ceph-mon[79846]: 7.10 scrub ok
Dec 08 09:48:30 compute-1 ceph-mon[79846]: Deploying daemon mds.cephfs.compute-0.ywanut on compute-0
Dec 08 09:48:30 compute-1 ceph-mon[79846]: mds.? [v2:192.168.122.102:6804/1007969270,v1:192.168.122.102:6805/1007969270] up:boot
Dec 08 09:48:30 compute-1 ceph-mon[79846]: daemon mds.cephfs.compute-2.hhmzvb assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Dec 08 09:48:30 compute-1 ceph-mon[79846]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Dec 08 09:48:30 compute-1 ceph-mon[79846]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Dec 08 09:48:30 compute-1 ceph-mon[79846]: Cluster is now healthy
Dec 08 09:48:30 compute-1 ceph-mon[79846]: fsmap cephfs:0 1 up:standby
Dec 08 09:48:30 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.hhmzvb"}]: dispatch
Dec 08 09:48:30 compute-1 ceph-mon[79846]: fsmap cephfs:1 {0=cephfs.compute-2.hhmzvb=up:creating}
Dec 08 09:48:30 compute-1 ceph-mon[79846]: daemon mds.cephfs.compute-2.hhmzvb is now active in filesystem cephfs as rank 0
Dec 08 09:48:30 compute-1 ceph-mon[79846]: 6.e scrub starts
Dec 08 09:48:30 compute-1 ceph-mon[79846]: 6.e scrub ok
Dec 08 09:48:30 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).mds e5 new map
Dec 08 09:48:30 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).mds e5 print_map
                                           e5
                                           btime 2025-12-08T09:48:30.344635+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-08T09:48:11.623571+0000
                                           modified        2025-12-08T09:48:30.344631+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24232}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 24232 members: 24232
                                           [mds.cephfs.compute-2.hhmzvb{0:24232} state up:active seq 2 addr [v2:192.168.122.102:6804/1007969270,v1:192.168.122.102:6805/1007969270] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
Dec 08 09:48:30 compute-1 sudo[84847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 08 09:48:30 compute-1 sudo[84847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:48:30 compute-1 sudo[84847]: pam_unix(sudo:session): session closed for user root
Dec 08 09:48:30 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 6.d scrub starts
Dec 08 09:48:30 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 6.d scrub ok
Dec 08 09:48:30 compute-1 sudo[84872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4
Dec 08 09:48:30 compute-1 sudo[84872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:48:31 compute-1 podman[84935]: 2025-12-08 09:48:31.305141981 +0000 UTC m=+0.045651182 container create e2b2bb2d005e6649f67580a8bd4223b1421d186ace704a2a4f519c78842153d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 08 09:48:31 compute-1 systemd[1]: Started libpod-conmon-e2b2bb2d005e6649f67580a8bd4223b1421d186ace704a2a4f519c78842153d5.scope.
Dec 08 09:48:31 compute-1 ceph-mon[79846]: from='client.14616 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 08 09:48:31 compute-1 ceph-mon[79846]: 7.13 scrub starts
Dec 08 09:48:31 compute-1 ceph-mon[79846]: 7.13 scrub ok
Dec 08 09:48:31 compute-1 ceph-mon[79846]: mds.? [v2:192.168.122.102:6804/1007969270,v1:192.168.122.102:6805/1007969270] up:active
Dec 08 09:48:31 compute-1 ceph-mon[79846]: fsmap cephfs:1 {0=cephfs.compute-2.hhmzvb=up:active}
Dec 08 09:48:31 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:48:31 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:48:31 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:48:31 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.tjxjxt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec 08 09:48:31 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.tjxjxt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec 08 09:48:31 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 08 09:48:31 compute-1 ceph-mon[79846]: 6.d scrub starts
Dec 08 09:48:31 compute-1 ceph-mon[79846]: 6.d scrub ok
Dec 08 09:48:31 compute-1 systemd[1]: Started libcrun container.
Dec 08 09:48:31 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).mds e6 new map
Dec 08 09:48:31 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).mds e6 print_map
                                           e6
                                           btime 2025-12-08T09:48:31.354977+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-08T09:48:11.623571+0000
                                           modified        2025-12-08T09:48:30.344631+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24232}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 24232 members: 24232
                                           [mds.cephfs.compute-2.hhmzvb{0:24232} state up:active seq 2 addr [v2:192.168.122.102:6804/1007969270,v1:192.168.122.102:6805/1007969270] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.ywanut{-1:14622} state up:standby seq 1 addr [v2:192.168.122.100:6806/629465497,v1:192.168.122.100:6807/629465497] compat {c=[1],r=[1],i=[1fff]}]
Dec 08 09:48:31 compute-1 podman[84935]: 2025-12-08 09:48:31.278854756 +0000 UTC m=+0.019363977 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 08 09:48:31 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).mds e7 new map
Dec 08 09:48:31 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).mds e7 print_map
                                           e7
                                           btime 2025-12-08T09:48:31.374596+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-08T09:48:11.623571+0000
                                           modified        2025-12-08T09:48:30.344631+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24232}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24232 members: 24232
                                           [mds.cephfs.compute-2.hhmzvb{0:24232} state up:active seq 2 addr [v2:192.168.122.102:6804/1007969270,v1:192.168.122.102:6805/1007969270] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.ywanut{-1:14622} state up:standby seq 1 addr [v2:192.168.122.100:6806/629465497,v1:192.168.122.100:6807/629465497] compat {c=[1],r=[1],i=[1fff]}]
Dec 08 09:48:31 compute-1 podman[84935]: 2025-12-08 09:48:31.388777753 +0000 UTC m=+0.129286974 container init e2b2bb2d005e6649f67580a8bd4223b1421d186ace704a2a4f519c78842153d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_lehmann, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0)
Dec 08 09:48:31 compute-1 podman[84935]: 2025-12-08 09:48:31.397621317 +0000 UTC m=+0.138130518 container start e2b2bb2d005e6649f67580a8bd4223b1421d186ace704a2a4f519c78842153d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_lehmann, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 08 09:48:31 compute-1 podman[84935]: 2025-12-08 09:48:31.400807119 +0000 UTC m=+0.141316320 container attach e2b2bb2d005e6649f67580a8bd4223b1421d186ace704a2a4f519c78842153d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True)
Dec 08 09:48:31 compute-1 vibrant_lehmann[84951]: 167 167
Dec 08 09:48:31 compute-1 systemd[1]: libpod-e2b2bb2d005e6649f67580a8bd4223b1421d186ace704a2a4f519c78842153d5.scope: Deactivated successfully.
Dec 08 09:48:31 compute-1 conmon[84951]: conmon e2b2bb2d005e6649f675 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e2b2bb2d005e6649f67580a8bd4223b1421d186ace704a2a4f519c78842153d5.scope/container/memory.events
Dec 08 09:48:31 compute-1 podman[84935]: 2025-12-08 09:48:31.405872584 +0000 UTC m=+0.146381785 container died e2b2bb2d005e6649f67580a8bd4223b1421d186ace704a2a4f519c78842153d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 08 09:48:31 compute-1 systemd[1]: var-lib-containers-storage-overlay-7ed63d4c072d5534156a62f4fdfbd7b910424a1dc61241044ae24146f283260f-merged.mount: Deactivated successfully.
Dec 08 09:48:31 compute-1 podman[84935]: 2025-12-08 09:48:31.442237389 +0000 UTC m=+0.182746590 container remove e2b2bb2d005e6649f67580a8bd4223b1421d186ace704a2a4f519c78842153d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_lehmann, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 08 09:48:31 compute-1 systemd[1]: libpod-conmon-e2b2bb2d005e6649f67580a8bd4223b1421d186ace704a2a4f519c78842153d5.scope: Deactivated successfully.
Dec 08 09:48:31 compute-1 systemd[1]: Reloading.
Dec 08 09:48:31 compute-1 systemd-rc-local-generator[84994]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 08 09:48:31 compute-1 systemd-sysv-generator[84999]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 08 09:48:31 compute-1 systemd[1]: Reloading.
Dec 08 09:48:31 compute-1 systemd-rc-local-generator[85035]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 08 09:48:31 compute-1 systemd-sysv-generator[85039]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 08 09:48:31 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 6.19 scrub starts
Dec 08 09:48:31 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 6.19 scrub ok
Dec 08 09:48:32 compute-1 systemd[1]: Starting Ceph mds.cephfs.compute-1.tjxjxt for ceb838ef-9d5d-54e4-bddb-2f01adce2ad4...
Dec 08 09:48:32 compute-1 podman[85095]: 2025-12-08 09:48:32.320786341 +0000 UTC m=+0.045408245 container create 834566a75a481c616777bfc45afcf373bc9ba8159f313398b155fc41a823cc16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mds-cephfs-compute-1-tjxjxt, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 08 09:48:32 compute-1 ceph-mon[79846]: pgmap v17: 198 pgs: 198 active+clean; 454 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 170 B/s wr, 2 op/s
Dec 08 09:48:32 compute-1 ceph-mon[79846]: Deploying daemon mds.cephfs.compute-1.tjxjxt on compute-1
Dec 08 09:48:32 compute-1 ceph-mon[79846]: 6.1d scrub starts
Dec 08 09:48:32 compute-1 ceph-mon[79846]: 6.1d scrub ok
Dec 08 09:48:32 compute-1 ceph-mon[79846]: from='client.14628 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 08 09:48:32 compute-1 ceph-mon[79846]: mds.? [v2:192.168.122.100:6806/629465497,v1:192.168.122.100:6807/629465497] up:boot
Dec 08 09:48:32 compute-1 ceph-mon[79846]: fsmap cephfs:1 {0=cephfs.compute-2.hhmzvb=up:active} 1 up:standby
Dec 08 09:48:32 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.ywanut"}]: dispatch
Dec 08 09:48:32 compute-1 ceph-mon[79846]: fsmap cephfs:1 {0=cephfs.compute-2.hhmzvb=up:active} 1 up:standby
Dec 08 09:48:32 compute-1 ceph-mon[79846]: 6.19 scrub starts
Dec 08 09:48:32 compute-1 ceph-mon[79846]: 6.19 scrub ok
Dec 08 09:48:32 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ed8a26ca0703a6026c901e62ab0028a2a9b0394937a0ad0c2ecc11fb0a936e3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 08 09:48:32 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ed8a26ca0703a6026c901e62ab0028a2a9b0394937a0ad0c2ecc11fb0a936e3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 08 09:48:32 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ed8a26ca0703a6026c901e62ab0028a2a9b0394937a0ad0c2ecc11fb0a936e3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 08 09:48:32 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ed8a26ca0703a6026c901e62ab0028a2a9b0394937a0ad0c2ecc11fb0a936e3/merged/var/lib/ceph/mds/ceph-cephfs.compute-1.tjxjxt supports timestamps until 2038 (0x7fffffff)
Dec 08 09:48:32 compute-1 podman[85095]: 2025-12-08 09:48:32.393202791 +0000 UTC m=+0.117824725 container init 834566a75a481c616777bfc45afcf373bc9ba8159f313398b155fc41a823cc16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mds-cephfs-compute-1-tjxjxt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 08 09:48:32 compute-1 podman[85095]: 2025-12-08 09:48:32.301150407 +0000 UTC m=+0.025772361 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 08 09:48:32 compute-1 podman[85095]: 2025-12-08 09:48:32.405053261 +0000 UTC m=+0.129675215 container start 834566a75a481c616777bfc45afcf373bc9ba8159f313398b155fc41a823cc16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mds-cephfs-compute-1-tjxjxt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 08 09:48:32 compute-1 bash[85095]: 834566a75a481c616777bfc45afcf373bc9ba8159f313398b155fc41a823cc16
Dec 08 09:48:32 compute-1 systemd[1]: Started Ceph mds.cephfs.compute-1.tjxjxt for ceb838ef-9d5d-54e4-bddb-2f01adce2ad4.
Dec 08 09:48:32 compute-1 ceph-mds[85114]: set uid:gid to 167:167 (ceph:ceph)
Dec 08 09:48:32 compute-1 ceph-mds[85114]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mds, pid 2
Dec 08 09:48:32 compute-1 ceph-mds[85114]: main not setting numa affinity
Dec 08 09:48:32 compute-1 ceph-mds[85114]: pidfile_write: ignore empty --pid-file
Dec 08 09:48:32 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mds-cephfs-compute-1-tjxjxt[85110]: starting mds.cephfs.compute-1.tjxjxt at 
Dec 08 09:48:32 compute-1 ceph-mds[85114]: mds.cephfs.compute-1.tjxjxt Updating MDS map to version 7 from mon.2
Dec 08 09:48:32 compute-1 sudo[84872]: pam_unix(sudo:session): session closed for user root
Dec 08 09:48:32 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 6.1a scrub starts
Dec 08 09:48:32 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 6.1a scrub ok
Dec 08 09:48:32 compute-1 sudo[85133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 08 09:48:32 compute-1 sudo[85133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:48:32 compute-1 sudo[85133]: pam_unix(sudo:session): session closed for user root
Dec 08 09:48:33 compute-1 sudo[85158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4
Dec 08 09:48:33 compute-1 sudo[85158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:48:33 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).mds e8 new map
Dec 08 09:48:33 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).mds e8 print_map
                                           e8
                                           btime 2025-12-08T09:48:33.354090+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-08T09:48:11.623571+0000
                                           modified        2025-12-08T09:48:30.344631+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24232}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24232 members: 24232
                                           [mds.cephfs.compute-2.hhmzvb{0:24232} state up:active seq 2 addr [v2:192.168.122.102:6804/1007969270,v1:192.168.122.102:6805/1007969270] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.ywanut{-1:14622} state up:standby seq 1 addr [v2:192.168.122.100:6806/629465497,v1:192.168.122.100:6807/629465497] compat {c=[1],r=[1],i=[1fff]}]
                                           [mds.cephfs.compute-1.tjxjxt{-1:24218} state up:standby seq 1 addr [v2:192.168.122.101:6804/1497473063,v1:192.168.122.101:6805/1497473063] compat {c=[1],r=[1],i=[1fff]}]
Dec 08 09:48:33 compute-1 ceph-mds[85114]: mds.cephfs.compute-1.tjxjxt Updating MDS map to version 8 from mon.2
Dec 08 09:48:33 compute-1 ceph-mds[85114]: mds.cephfs.compute-1.tjxjxt Monitors have assigned me to become a standby
Dec 08 09:48:33 compute-1 podman[85223]: 2025-12-08 09:48:33.407136082 +0000 UTC m=+0.042534193 container create ca9449a660aeff14e7a91b986a83834f570f871a1c9cfbaed9e7f7bf5cb9c376 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec 08 09:48:33 compute-1 systemd[1]: Started libpod-conmon-ca9449a660aeff14e7a91b986a83834f570f871a1c9cfbaed9e7f7bf5cb9c376.scope.
Dec 08 09:48:33 compute-1 systemd[1]: Started libcrun container.
Dec 08 09:48:33 compute-1 podman[85223]: 2025-12-08 09:48:33.478506612 +0000 UTC m=+0.113904753 container init ca9449a660aeff14e7a91b986a83834f570f871a1c9cfbaed9e7f7bf5cb9c376 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_chatelet, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 08 09:48:33 compute-1 podman[85223]: 2025-12-08 09:48:33.486202773 +0000 UTC m=+0.121600874 container start ca9449a660aeff14e7a91b986a83834f570f871a1c9cfbaed9e7f7bf5cb9c376 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_chatelet, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec 08 09:48:33 compute-1 podman[85223]: 2025-12-08 09:48:33.390947517 +0000 UTC m=+0.026345648 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 08 09:48:33 compute-1 podman[85223]: 2025-12-08 09:48:33.488980773 +0000 UTC m=+0.124378934 container attach ca9449a660aeff14e7a91b986a83834f570f871a1c9cfbaed9e7f7bf5cb9c376 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_chatelet, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 08 09:48:33 compute-1 condescending_chatelet[85240]: 167 167
Dec 08 09:48:33 compute-1 systemd[1]: libpod-ca9449a660aeff14e7a91b986a83834f570f871a1c9cfbaed9e7f7bf5cb9c376.scope: Deactivated successfully.
Dec 08 09:48:33 compute-1 conmon[85240]: conmon ca9449a660aeff14e7a9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ca9449a660aeff14e7a91b986a83834f570f871a1c9cfbaed9e7f7bf5cb9c376.scope/container/memory.events
Dec 08 09:48:33 compute-1 podman[85223]: 2025-12-08 09:48:33.493744119 +0000 UTC m=+0.129142230 container died ca9449a660aeff14e7a91b986a83834f570f871a1c9cfbaed9e7f7bf5cb9c376 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec 08 09:48:33 compute-1 systemd[1]: var-lib-containers-storage-overlay-f6bd80e8e35c7a5a703f6e9018b8b7c2e54ecd72300b9751766cd8dca6e5df92-merged.mount: Deactivated successfully.
Dec 08 09:48:33 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:48:33 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:48:33 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:48:33 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:48:33 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:48:33 compute-1 ceph-mon[79846]: pgmap v18: 198 pgs: 198 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1.2 KiB/s wr, 5 op/s
Dec 08 09:48:33 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:48:33 compute-1 ceph-mon[79846]: Creating key for client.nfs.cephfs.0.0.compute-1.drrxym
Dec 08 09:48:33 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.drrxym", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Dec 08 09:48:33 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.drrxym", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Dec 08 09:48:33 compute-1 ceph-mon[79846]: Ensuring nfs.cephfs.0 is in the ganesha grace table
Dec 08 09:48:33 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Dec 08 09:48:33 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Dec 08 09:48:33 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 08 09:48:33 compute-1 ceph-mon[79846]: from='client.? 192.168.122.100:0/2405458525' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 08 09:48:33 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Dec 08 09:48:33 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Dec 08 09:48:33 compute-1 ceph-mon[79846]: Rados config object exists: conf-nfs.cephfs
Dec 08 09:48:33 compute-1 ceph-mon[79846]: Creating key for client.nfs.cephfs.0.0.compute-1.drrxym-rgw
Dec 08 09:48:33 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.drrxym-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 08 09:48:33 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.drrxym-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 08 09:48:33 compute-1 ceph-mon[79846]: Bind address in nfs.cephfs.0.0.compute-1.drrxym's ganesha conf is defaulting to empty
Dec 08 09:48:33 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 08 09:48:33 compute-1 ceph-mon[79846]: Deploying daemon nfs.cephfs.0.0.compute-1.drrxym on compute-1
Dec 08 09:48:33 compute-1 ceph-mon[79846]: 6.1a scrub starts
Dec 08 09:48:33 compute-1 ceph-mon[79846]: 6.1a scrub ok
Dec 08 09:48:33 compute-1 ceph-mon[79846]: mds.? [v2:192.168.122.101:6804/1497473063,v1:192.168.122.101:6805/1497473063] up:boot
Dec 08 09:48:33 compute-1 ceph-mon[79846]: fsmap cephfs:1 {0=cephfs.compute-2.hhmzvb=up:active} 2 up:standby
Dec 08 09:48:33 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.tjxjxt"}]: dispatch
Dec 08 09:48:33 compute-1 podman[85223]: 2025-12-08 09:48:33.529123825 +0000 UTC m=+0.164521936 container remove ca9449a660aeff14e7a91b986a83834f570f871a1c9cfbaed9e7f7bf5cb9c376 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_chatelet, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True)
Dec 08 09:48:33 compute-1 systemd[1]: libpod-conmon-ca9449a660aeff14e7a91b986a83834f570f871a1c9cfbaed9e7f7bf5cb9c376.scope: Deactivated successfully.
Dec 08 09:48:33 compute-1 systemd[1]: Reloading.
Dec 08 09:48:33 compute-1 systemd-sysv-generator[85288]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 08 09:48:33 compute-1 systemd-rc-local-generator[85279]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 08 09:48:33 compute-1 systemd[1]: Reloading.
Dec 08 09:48:33 compute-1 systemd-rc-local-generator[85328]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 08 09:48:33 compute-1 systemd-sysv-generator[85332]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 08 09:48:34 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.drrxym for ceb838ef-9d5d-54e4-bddb-2f01adce2ad4...
Dec 08 09:48:34 compute-1 podman[85381]: 2025-12-08 09:48:34.304147624 +0000 UTC m=+0.045727354 container create 23497d82ee9ea70335fca0f3c4309147d81269293469bcfee1aca51ee00d5dc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec 08 09:48:34 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfe7d6cba994e88c567f10cfe5d82a7f4a8ba2f59e829574759d6247bd8b2df6/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec 08 09:48:34 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfe7d6cba994e88c567f10cfe5d82a7f4a8ba2f59e829574759d6247bd8b2df6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 08 09:48:34 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfe7d6cba994e88c567f10cfe5d82a7f4a8ba2f59e829574759d6247bd8b2df6/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec 08 09:48:34 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfe7d6cba994e88c567f10cfe5d82a7f4a8ba2f59e829574759d6247bd8b2df6/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.drrxym-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec 08 09:48:34 compute-1 podman[85381]: 2025-12-08 09:48:34.365102055 +0000 UTC m=+0.106681805 container init 23497d82ee9ea70335fca0f3c4309147d81269293469bcfee1aca51ee00d5dc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 08 09:48:34 compute-1 podman[85381]: 2025-12-08 09:48:34.371579171 +0000 UTC m=+0.113158901 container start 23497d82ee9ea70335fca0f3c4309147d81269293469bcfee1aca51ee00d5dc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 08 09:48:34 compute-1 bash[85381]: 23497d82ee9ea70335fca0f3c4309147d81269293469bcfee1aca51ee00d5dc9
Dec 08 09:48:34 compute-1 podman[85381]: 2025-12-08 09:48:34.28449787 +0000 UTC m=+0.026077680 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 08 09:48:34 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).mds e9 new map
Dec 08 09:48:34 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).mds e9 print_map
                                           e9
                                           btime 2025-12-08T09:48:34.364938+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        9
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-08T09:48:11.623571+0000
                                           modified        2025-12-08T09:48:33.376389+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24232}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24232 members: 24232
                                           [mds.cephfs.compute-2.hhmzvb{0:24232} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/1007969270,v1:192.168.122.102:6805/1007969270] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.ywanut{-1:14622} state up:standby seq 1 addr [v2:192.168.122.100:6806/629465497,v1:192.168.122.100:6807/629465497] compat {c=[1],r=[1],i=[1fff]}]
                                           [mds.cephfs.compute-1.tjxjxt{-1:24218} state up:standby seq 1 addr [v2:192.168.122.101:6804/1497473063,v1:192.168.122.101:6805/1497473063] compat {c=[1],r=[1],i=[1fff]}]
Dec 08 09:48:34 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.drrxym for ceb838ef-9d5d-54e4-bddb-2f01adce2ad4.
Dec 08 09:48:34 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:48:34 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec 08 09:48:34 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:48:34 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec 08 09:48:34 compute-1 sudo[85158]: pam_unix(sudo:session): session closed for user root
Dec 08 09:48:34 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:48:34 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec 08 09:48:34 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:48:34 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec 08 09:48:34 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:48:34 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec 08 09:48:34 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:48:34 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec 08 09:48:34 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:48:34 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec 08 09:48:34 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:48:34 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 08 09:48:34 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:48:34 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Dec 08 09:48:34 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:48:34 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Dec 08 09:48:34 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:48:34 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 08 09:48:34 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:48:34 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 08 09:48:34 compute-1 ceph-mon[79846]: from='client.? 192.168.122.100:0/29586248' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 08 09:48:34 compute-1 ceph-mon[79846]: mds.? [v2:192.168.122.102:6804/1007969270,v1:192.168.122.102:6805/1007969270] up:active
Dec 08 09:48:34 compute-1 ceph-mon[79846]: fsmap cephfs:1 {0=cephfs.compute-2.hhmzvb=up:active} 2 up:standby
Dec 08 09:48:34 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:48:34 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:48:34 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:48:34 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.wmyfrt", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Dec 08 09:48:34 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.wmyfrt", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Dec 08 09:48:34 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Dec 08 09:48:34 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Dec 08 09:48:34 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 08 09:48:34 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e49 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 08 09:48:35 compute-1 ceph-mon[79846]: Creating key for client.nfs.cephfs.1.0.compute-2.wmyfrt
Dec 08 09:48:35 compute-1 ceph-mon[79846]: Ensuring nfs.cephfs.1 is in the ganesha grace table
Dec 08 09:48:35 compute-1 ceph-mon[79846]: pgmap v19: 198 pgs: 198 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s wr, 3 op/s
Dec 08 09:48:35 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).mds e10 new map
Dec 08 09:48:35 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).mds e10 print_map
                                           e10
                                           btime 2025-12-08T09:48:35.579331+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        9
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-08T09:48:11.623571+0000
                                           modified        2025-12-08T09:48:33.376389+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24232}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24232 members: 24232
                                           [mds.cephfs.compute-2.hhmzvb{0:24232} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/1007969270,v1:192.168.122.102:6805/1007969270] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.ywanut{-1:14622} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/629465497,v1:192.168.122.100:6807/629465497] compat {c=[1],r=[1],i=[1fff]}]
                                           [mds.cephfs.compute-1.tjxjxt{-1:24218} state up:standby seq 1 addr [v2:192.168.122.101:6804/1497473063,v1:192.168.122.101:6805/1497473063] compat {c=[1],r=[1],i=[1fff]}]
Dec 08 09:48:36 compute-1 ceph-mon[79846]: mds.? [v2:192.168.122.100:6806/629465497,v1:192.168.122.100:6807/629465497] up:standby
Dec 08 09:48:36 compute-1 ceph-mon[79846]: fsmap cephfs:1 {0=cephfs.compute-2.hhmzvb=up:active} 2 up:standby
Dec 08 09:48:36 compute-1 ceph-mon[79846]: from='client.? 192.168.122.100:0/1716906672' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Dec 08 09:48:36 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:48:36 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).mds e11 new map
Dec 08 09:48:36 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).mds e11 print_map
                                           e11
                                           btime 2025-12-08T09:48:36.799335+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        9
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-08T09:48:11.623571+0000
                                           modified        2025-12-08T09:48:33.376389+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24232}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24232 members: 24232
                                           [mds.cephfs.compute-2.hhmzvb{0:24232} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/1007969270,v1:192.168.122.102:6805/1007969270] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.ywanut{-1:14622} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/629465497,v1:192.168.122.100:6807/629465497] compat {c=[1],r=[1],i=[1fff]}]
                                           [mds.cephfs.compute-1.tjxjxt{-1:24218} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/1497473063,v1:192.168.122.101:6805/1497473063] compat {c=[1],r=[1],i=[1fff]}]
Dec 08 09:48:36 compute-1 ceph-mds[85114]: mds.cephfs.compute-1.tjxjxt Updating MDS map to version 11 from mon.2
Dec 08 09:48:37 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:48:37 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000001:nfs.cephfs.0: -2
Dec 08 09:48:37 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:48:37 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 08 09:48:37 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:48:37 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec 08 09:48:37 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:48:37 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec 08 09:48:37 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:48:37 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec 08 09:48:37 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:48:37 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec 08 09:48:37 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:48:37 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec 08 09:48:37 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:48:37 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec 08 09:48:37 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:48:37 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 08 09:48:37 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:48:37 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 08 09:48:37 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:48:37 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 08 09:48:37 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:48:37 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec 08 09:48:37 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:48:37 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 08 09:48:37 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:48:37 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec 08 09:48:37 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:48:37 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec 08 09:48:37 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:48:37 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec 08 09:48:37 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:48:37 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec 08 09:48:37 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:48:37 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Dec 08 09:48:37 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:48:37 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec 08 09:48:37 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:48:37 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec 08 09:48:37 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:48:37 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec 08 09:48:37 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:48:37 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Dec 08 09:48:37 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:48:37 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec 08 09:48:37 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:48:37 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec 08 09:48:37 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:48:37 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec 08 09:48:37 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:48:37 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 08 09:48:37 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:48:37 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec 08 09:48:37 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:48:37 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 08 09:48:37 compute-1 ceph-mon[79846]: pgmap v20: 198 pgs: 198 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s wr, 3 op/s
Dec 08 09:48:37 compute-1 ceph-mon[79846]: mds.? [v2:192.168.122.101:6804/1497473063,v1:192.168.122.101:6805/1497473063] up:standby
Dec 08 09:48:37 compute-1 ceph-mon[79846]: fsmap cephfs:1 {0=cephfs.compute-2.hhmzvb=up:active} 2 up:standby
Dec 08 09:48:37 compute-1 ceph-mon[79846]: from='client.? 192.168.122.100:0/3042081611' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Dec 08 09:48:37 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Dec 08 09:48:37 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Dec 08 09:48:37 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.wmyfrt-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 08 09:48:37 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.wmyfrt-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 08 09:48:37 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 08 09:48:38 compute-1 ceph-mon[79846]: Rados config object exists: conf-nfs.cephfs
Dec 08 09:48:38 compute-1 ceph-mon[79846]: Creating key for client.nfs.cephfs.1.0.compute-2.wmyfrt-rgw
Dec 08 09:48:38 compute-1 ceph-mon[79846]: Bind address in nfs.cephfs.1.0.compute-2.wmyfrt's ganesha conf is defaulting to empty
Dec 08 09:48:38 compute-1 ceph-mon[79846]: Deploying daemon nfs.cephfs.1.0.compute-2.wmyfrt on compute-2
Dec 08 09:48:39 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:48:39 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 08 09:48:39 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:48:39 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 08 09:48:39 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:48:39 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 08 09:48:39 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e49 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 08 09:48:39 compute-1 ceph-mon[79846]: pgmap v21: 198 pgs: 198 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 1.8 KiB/s wr, 4 op/s
Dec 08 09:48:39 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:48:39 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:48:39 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:48:39 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.cuvvno", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Dec 08 09:48:39 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.cuvvno", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Dec 08 09:48:39 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Dec 08 09:48:39 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Dec 08 09:48:39 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 08 09:48:40 compute-1 ceph-mon[79846]: Creating key for client.nfs.cephfs.2.0.compute-0.cuvvno
Dec 08 09:48:40 compute-1 ceph-mon[79846]: Ensuring nfs.cephfs.2 is in the ganesha grace table
Dec 08 09:48:41 compute-1 sshd[1006]: Timeout before authentication for connection from 180.76.105.69 to 38.102.83.181, pid = 79511
Dec 08 09:48:41 compute-1 ceph-mon[79846]: pgmap v22: 198 pgs: 198 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 1.8 KiB/s wr, 4 op/s
Dec 08 09:48:42 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:48:42 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 08 09:48:42 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Dec 08 09:48:42 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Dec 08 09:48:43 compute-1 ceph-mon[79846]: pgmap v23: 198 pgs: 198 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 2.6 KiB/s wr, 7 op/s
Dec 08 09:48:43 compute-1 ceph-mon[79846]: Rados config object exists: conf-nfs.cephfs
Dec 08 09:48:43 compute-1 ceph-mon[79846]: Creating key for client.nfs.cephfs.2.0.compute-0.cuvvno-rgw
Dec 08 09:48:43 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.cuvvno-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 08 09:48:43 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.cuvvno-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 08 09:48:43 compute-1 ceph-mon[79846]: Bind address in nfs.cephfs.2.0.compute-0.cuvvno's ganesha conf is defaulting to empty
Dec 08 09:48:43 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 08 09:48:43 compute-1 ceph-mon[79846]: Deploying daemon nfs.cephfs.2.0.compute-0.cuvvno on compute-0
Dec 08 09:48:44 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e49 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 08 09:48:45 compute-1 sudo[85452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 08 09:48:45 compute-1 sudo[85452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:48:45 compute-1 sudo[85452]: pam_unix(sudo:session): session closed for user root
Dec 08 09:48:45 compute-1 sudo[85477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/haproxy:2.3 --timeout 895 _orch deploy --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4
Dec 08 09:48:45 compute-1 sudo[85477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:48:45 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:48:45 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 08 09:48:45 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:48:45 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 08 09:48:45 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:48:45 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 08 09:48:45 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:48:45 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 08 09:48:45 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:48:45 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 08 09:48:45 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:48:45 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 08 09:48:45 compute-1 ceph-mon[79846]: pgmap v24: 198 pgs: 198 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1.5 KiB/s wr, 4 op/s
Dec 08 09:48:45 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:48:45 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:48:45 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:48:45 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:48:45 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:48:45 compute-1 ceph-mon[79846]: Deploying daemon haproxy.nfs.cephfs.compute-1.opvoqw on compute-1
Dec 08 09:48:46 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:48:47 compute-1 ceph-mon[79846]: pgmap v25: 198 pgs: 198 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1.5 KiB/s wr, 4 op/s
Dec 08 09:48:48 compute-1 podman[85541]: 2025-12-08 09:48:48.235247929 +0000 UTC m=+2.722405429 container create 987dd3bdc4c580735a656b733a49e376ba7bb3b39eb7b52e102c94caa2b7b46f (image=quay.io/ceph/haproxy:2.3, name=suspicious_nash)
Dec 08 09:48:48 compute-1 podman[85541]: 2025-12-08 09:48:48.214488625 +0000 UTC m=+2.701646175 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Dec 08 09:48:48 compute-1 systemd[1]: Started libpod-conmon-987dd3bdc4c580735a656b733a49e376ba7bb3b39eb7b52e102c94caa2b7b46f.scope.
Dec 08 09:48:48 compute-1 systemd[1]: Started libcrun container.
Dec 08 09:48:48 compute-1 podman[85541]: 2025-12-08 09:48:48.321010618 +0000 UTC m=+2.808168158 container init 987dd3bdc4c580735a656b733a49e376ba7bb3b39eb7b52e102c94caa2b7b46f (image=quay.io/ceph/haproxy:2.3, name=suspicious_nash)
Dec 08 09:48:48 compute-1 podman[85541]: 2025-12-08 09:48:48.326985485 +0000 UTC m=+2.814142985 container start 987dd3bdc4c580735a656b733a49e376ba7bb3b39eb7b52e102c94caa2b7b46f (image=quay.io/ceph/haproxy:2.3, name=suspicious_nash)
Dec 08 09:48:48 compute-1 podman[85541]: 2025-12-08 09:48:48.330036485 +0000 UTC m=+2.817194005 container attach 987dd3bdc4c580735a656b733a49e376ba7bb3b39eb7b52e102c94caa2b7b46f (image=quay.io/ceph/haproxy:2.3, name=suspicious_nash)
Dec 08 09:48:48 compute-1 suspicious_nash[85658]: 0 0
Dec 08 09:48:48 compute-1 systemd[1]: libpod-987dd3bdc4c580735a656b733a49e376ba7bb3b39eb7b52e102c94caa2b7b46f.scope: Deactivated successfully.
Dec 08 09:48:48 compute-1 conmon[85658]: conmon 987dd3bdc4c580735a65 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-987dd3bdc4c580735a656b733a49e376ba7bb3b39eb7b52e102c94caa2b7b46f.scope/container/memory.events
Dec 08 09:48:48 compute-1 podman[85541]: 2025-12-08 09:48:48.3366096 +0000 UTC m=+2.823767120 container died 987dd3bdc4c580735a656b733a49e376ba7bb3b39eb7b52e102c94caa2b7b46f (image=quay.io/ceph/haproxy:2.3, name=suspicious_nash)
Dec 08 09:48:48 compute-1 systemd[1]: var-lib-containers-storage-overlay-47c5eea0431c05661926c1d2fc45bce0f3ecf30e37b675f7afeee1bb57bef96d-merged.mount: Deactivated successfully.
Dec 08 09:48:48 compute-1 podman[85541]: 2025-12-08 09:48:48.377758798 +0000 UTC m=+2.864916298 container remove 987dd3bdc4c580735a656b733a49e376ba7bb3b39eb7b52e102c94caa2b7b46f (image=quay.io/ceph/haproxy:2.3, name=suspicious_nash)
Dec 08 09:48:48 compute-1 systemd[1]: libpod-conmon-987dd3bdc4c580735a656b733a49e376ba7bb3b39eb7b52e102c94caa2b7b46f.scope: Deactivated successfully.
Dec 08 09:48:48 compute-1 systemd[1]: Reloading.
Dec 08 09:48:48 compute-1 systemd-rc-local-generator[85709]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 08 09:48:48 compute-1 systemd-sysv-generator[85713]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 08 09:48:48 compute-1 systemd[1]: Reloading.
Dec 08 09:48:48 compute-1 systemd-rc-local-generator[85747]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 08 09:48:48 compute-1 systemd-sysv-generator[85752]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 08 09:48:49 compute-1 systemd[1]: Starting Ceph haproxy.nfs.cephfs.compute-1.opvoqw for ceb838ef-9d5d-54e4-bddb-2f01adce2ad4...
Dec 08 09:48:49 compute-1 podman[85803]: 2025-12-08 09:48:49.346728386 +0000 UTC m=+0.074743213 container create b9fd0246c48061305c7811136c6a39b092dbdac0fc6cb0fd31313ce10b304fdc (image=quay.io/ceph/haproxy:2.3, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-haproxy-nfs-cephfs-compute-1-opvoqw)
Dec 08 09:48:49 compute-1 podman[85803]: 2025-12-08 09:48:49.316536423 +0000 UTC m=+0.044551310 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Dec 08 09:48:49 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ae4d2006cafe92ca6522390241163fa714bf77cfb9054028ada180cc3d9b0f3/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Dec 08 09:48:49 compute-1 podman[85803]: 2025-12-08 09:48:49.430384792 +0000 UTC m=+0.158399659 container init b9fd0246c48061305c7811136c6a39b092dbdac0fc6cb0fd31313ce10b304fdc (image=quay.io/ceph/haproxy:2.3, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-haproxy-nfs-cephfs-compute-1-opvoqw)
Dec 08 09:48:49 compute-1 podman[85803]: 2025-12-08 09:48:49.439786661 +0000 UTC m=+0.167801488 container start b9fd0246c48061305c7811136c6a39b092dbdac0fc6cb0fd31313ce10b304fdc (image=quay.io/ceph/haproxy:2.3, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-haproxy-nfs-cephfs-compute-1-opvoqw)
Dec 08 09:48:49 compute-1 bash[85803]: b9fd0246c48061305c7811136c6a39b092dbdac0fc6cb0fd31313ce10b304fdc
Dec 08 09:48:49 compute-1 systemd[1]: Started Ceph haproxy.nfs.cephfs.compute-1.opvoqw for ceb838ef-9d5d-54e4-bddb-2f01adce2ad4.
Dec 08 09:48:49 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-haproxy-nfs-cephfs-compute-1-opvoqw[85819]: [NOTICE] 341/094849 (2) : New worker #1 (4) forked
Dec 08 09:48:49 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:48:49 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:48:49 compute-1 sudo[85477]: pam_unix(sudo:session): session closed for user root
Dec 08 09:48:49 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e49 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 08 09:48:49 compute-1 ceph-mon[79846]: pgmap v26: 198 pgs: 198 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 5.7 KiB/s rd, 2.6 KiB/s wr, 9 op/s
Dec 08 09:48:49 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:48:49 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:48:49 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:48:50 compute-1 ceph-mon[79846]: Deploying daemon haproxy.nfs.cephfs.compute-0.dvsreo on compute-0
Dec 08 09:48:51 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:48:51 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:48:51 compute-1 ceph-mon[79846]: pgmap v27: 198 pgs: 198 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 4.8 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Dec 08 09:48:53 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:48:53 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e64000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:48:53 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:48:53 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e60000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:48:53 compute-1 ceph-mon[79846]: pgmap v28: 198 pgs: 198 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 4.8 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Dec 08 09:48:53 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:48:53 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:48:53 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:48:54 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e49 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 08 09:48:54 compute-1 ceph-mon[79846]: Deploying daemon haproxy.nfs.cephfs.compute-2.mtmwtv on compute-2
Dec 08 09:48:55 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:48:55 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e780016e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:48:55 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:48:55 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c002520 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:48:55 compute-1 ceph-mon[79846]: pgmap v29: 198 pgs: 198 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Dec 08 09:48:57 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:48:57 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c002520 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:48:57 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:48:57 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e600016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:48:57 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:48:57 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e78002200 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:48:58 compute-1 ceph-mon[79846]: pgmap v30: 198 pgs: 198 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Dec 08 09:48:58 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:48:58 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:48:58 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:48:58 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:48:59 compute-1 ceph-mon[79846]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 08 09:48:59 compute-1 ceph-mon[79846]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec 08 09:48:59 compute-1 ceph-mon[79846]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 08 09:48:59 compute-1 ceph-mon[79846]: Deploying daemon keepalived.nfs.cephfs.compute-0.qxgfft on compute-0
Dec 08 09:48:59 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:48:59 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e640016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:48:59 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:48:59 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c002520 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:48:59 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:48:59 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e600016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:48:59 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e49 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 08 09:49:00 compute-1 ceph-mon[79846]: pgmap v31: 198 pgs: 198 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 1.1 KiB/s wr, 5 op/s
Dec 08 09:49:01 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:01 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e78002200 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:01 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:01 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e64001fc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:01 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:01 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c002520 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:02 compute-1 ceph-mon[79846]: pgmap v32: 198 pgs: 198 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 08 09:49:02 compute-1 sudo[85837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 08 09:49:02 compute-1 sudo[85837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:49:02 compute-1 sudo[85837]: pam_unix(sudo:session): session closed for user root
Dec 08 09:49:02 compute-1 sudo[85862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/keepalived:2.2.4 --timeout 895 _orch deploy --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4
Dec 08 09:49:02 compute-1 sudo[85862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:49:03 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:03 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e600016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:03 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:49:03 compute-1 ceph-mon[79846]: pgmap v33: 198 pgs: 198 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 08 09:49:03 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:49:03 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:49:03 compute-1 ceph-mon[79846]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec 08 09:49:03 compute-1 ceph-mon[79846]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 08 09:49:03 compute-1 ceph-mon[79846]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 08 09:49:03 compute-1 ceph-mon[79846]: Deploying daemon keepalived.nfs.cephfs.compute-1.khfxdl on compute-1
Dec 08 09:49:03 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:03 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e64001fc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:03 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:03 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e780032a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:04 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e49 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 08 09:49:04 compute-1 sshd-session[85836]: error: kex_exchange_identification: read: Connection timed out
Dec 08 09:49:04 compute-1 sshd-session[85836]: banner exchange: Connection from 120.48.123.76 port 60584: Connection timed out
Dec 08 09:49:05 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:05 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c002520 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:05 compute-1 ceph-mon[79846]: pgmap v34: 198 pgs: 198 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 08 09:49:05 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:05 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e60002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:05 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:05 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e64001fc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:06 compute-1 podman[85927]: 2025-12-08 09:49:06.454252343 +0000 UTC m=+3.244046068 container create 6e316c070c6df58ffe1802ce414c8b3368900edd5d83d7dc60eef465a5b2659c (image=quay.io/ceph/keepalived:2.2.4, name=determined_allen, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, release=1793, version=2.2.4)
Dec 08 09:49:06 compute-1 systemd[1]: Started libpod-conmon-6e316c070c6df58ffe1802ce414c8b3368900edd5d83d7dc60eef465a5b2659c.scope.
Dec 08 09:49:06 compute-1 systemd[1]: Started libcrun container.
Dec 08 09:49:06 compute-1 podman[85927]: 2025-12-08 09:49:06.533337463 +0000 UTC m=+3.323131268 container init 6e316c070c6df58ffe1802ce414c8b3368900edd5d83d7dc60eef465a5b2659c (image=quay.io/ceph/keepalived:2.2.4, name=determined_allen, vendor=Red Hat, Inc., io.buildah.version=1.28.2, io.openshift.expose-services=, release=1793, version=2.2.4, io.openshift.tags=Ceph keepalived, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, description=keepalived for Ceph, vcs-type=git, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container)
Dec 08 09:49:06 compute-1 podman[85927]: 2025-12-08 09:49:06.438998841 +0000 UTC m=+3.228792576 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Dec 08 09:49:06 compute-1 podman[85927]: 2025-12-08 09:49:06.546031219 +0000 UTC m=+3.335824934 container start 6e316c070c6df58ffe1802ce414c8b3368900edd5d83d7dc60eef465a5b2659c (image=quay.io/ceph/keepalived:2.2.4, name=determined_allen, io.buildah.version=1.28.2, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, com.redhat.component=keepalived-container, release=1793, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, version=2.2.4, build-date=2023-02-22T09:23:20, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec 08 09:49:06 compute-1 podman[85927]: 2025-12-08 09:49:06.549796931 +0000 UTC m=+3.339590696 container attach 6e316c070c6df58ffe1802ce414c8b3368900edd5d83d7dc60eef465a5b2659c (image=quay.io/ceph/keepalived:2.2.4, name=determined_allen, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, architecture=x86_64, io.openshift.expose-services=, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., name=keepalived, release=1793, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph.)
Dec 08 09:49:06 compute-1 determined_allen[86023]: 0 0
Dec 08 09:49:06 compute-1 systemd[1]: libpod-6e316c070c6df58ffe1802ce414c8b3368900edd5d83d7dc60eef465a5b2659c.scope: Deactivated successfully.
Dec 08 09:49:06 compute-1 podman[85927]: 2025-12-08 09:49:06.554363656 +0000 UTC m=+3.344157441 container died 6e316c070c6df58ffe1802ce414c8b3368900edd5d83d7dc60eef465a5b2659c (image=quay.io/ceph/keepalived:2.2.4, name=determined_allen, name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, version=2.2.4, vcs-type=git, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, architecture=x86_64, vendor=Red Hat, Inc., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec 08 09:49:06 compute-1 systemd[1]: var-lib-containers-storage-overlay-d9798560e9f1107872e2aeead6e270a616c9f99866e7e8d04ad6636a8439b0cc-merged.mount: Deactivated successfully.
Dec 08 09:49:06 compute-1 podman[85927]: 2025-12-08 09:49:06.593538825 +0000 UTC m=+3.383332550 container remove 6e316c070c6df58ffe1802ce414c8b3368900edd5d83d7dc60eef465a5b2659c (image=quay.io/ceph/keepalived:2.2.4, name=determined_allen, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, distribution-scope=public, name=keepalived, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, vendor=Red Hat, Inc., version=2.2.4, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Dec 08 09:49:06 compute-1 systemd[1]: libpod-conmon-6e316c070c6df58ffe1802ce414c8b3368900edd5d83d7dc60eef465a5b2659c.scope: Deactivated successfully.
Dec 08 09:49:06 compute-1 systemd[1]: Reloading.
Dec 08 09:49:06 compute-1 systemd-sysv-generator[86075]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 08 09:49:06 compute-1 systemd-rc-local-generator[86072]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 08 09:49:07 compute-1 systemd[1]: Reloading.
Dec 08 09:49:07 compute-1 systemd-rc-local-generator[86109]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 08 09:49:07 compute-1 systemd-sysv-generator[86115]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 08 09:49:07 compute-1 systemd[1]: Starting Ceph keepalived.nfs.cephfs.compute-1.khfxdl for ceb838ef-9d5d-54e4-bddb-2f01adce2ad4...
Dec 08 09:49:07 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:07 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e780032a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:07 compute-1 podman[86168]: 2025-12-08 09:49:07.501247331 +0000 UTC m=+0.045223300 container create ef47fc5b71b1e6ae6538605cddf4e1fdf4707b8b994d55ceabe6b66724d9d061 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-keepalived-nfs-cephfs-compute-1-khfxdl, vcs-type=git, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, description=keepalived for Ceph, architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vendor=Red Hat, Inc., version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public)
Dec 08 09:49:07 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb7e9df12815503916ea3366d84a78c602ab638631d3dbfdc0222eb2566ca186/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Dec 08 09:49:07 compute-1 podman[86168]: 2025-12-08 09:49:07.554751444 +0000 UTC m=+0.098727453 container init ef47fc5b71b1e6ae6538605cddf4e1fdf4707b8b994d55ceabe6b66724d9d061 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-keepalived-nfs-cephfs-compute-1-khfxdl, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.component=keepalived-container, distribution-scope=public, name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., release=1793, io.buildah.version=1.28.2)
Dec 08 09:49:07 compute-1 podman[86168]: 2025-12-08 09:49:07.56067108 +0000 UTC m=+0.104647049 container start ef47fc5b71b1e6ae6538605cddf4e1fdf4707b8b994d55ceabe6b66724d9d061 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-keepalived-nfs-cephfs-compute-1-khfxdl, com.redhat.component=keepalived-container, version=2.2.4, io.buildah.version=1.28.2, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, architecture=x86_64, name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc.)
Dec 08 09:49:07 compute-1 bash[86168]: ef47fc5b71b1e6ae6538605cddf4e1fdf4707b8b994d55ceabe6b66724d9d061
Dec 08 09:49:07 compute-1 podman[86168]: 2025-12-08 09:49:07.483259878 +0000 UTC m=+0.027235877 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Dec 08 09:49:07 compute-1 systemd[1]: Started Ceph keepalived.nfs.cephfs.compute-1.khfxdl for ceb838ef-9d5d-54e4-bddb-2f01adce2ad4.
Dec 08 09:49:07 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-keepalived-nfs-cephfs-compute-1-khfxdl[86184]: Mon Dec  8 09:49:07 2025: Starting Keepalived v2.2.4 (08/21,2021)
Dec 08 09:49:07 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-keepalived-nfs-cephfs-compute-1-khfxdl[86184]: Mon Dec  8 09:49:07 2025: Running on Linux 5.14.0-645.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025 (built for Linux 5.14.0)
Dec 08 09:49:07 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-keepalived-nfs-cephfs-compute-1-khfxdl[86184]: Mon Dec  8 09:49:07 2025: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Dec 08 09:49:07 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-keepalived-nfs-cephfs-compute-1-khfxdl[86184]: Mon Dec  8 09:49:07 2025: Configuration file /etc/keepalived/keepalived.conf
Dec 08 09:49:07 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-keepalived-nfs-cephfs-compute-1-khfxdl[86184]: Mon Dec  8 09:49:07 2025: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Dec 08 09:49:07 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-keepalived-nfs-cephfs-compute-1-khfxdl[86184]: Mon Dec  8 09:49:07 2025: Starting VRRP child process, pid=4
Dec 08 09:49:07 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-keepalived-nfs-cephfs-compute-1-khfxdl[86184]: Mon Dec  8 09:49:07 2025: Startup complete
Dec 08 09:49:07 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-keepalived-nfs-cephfs-compute-1-khfxdl[86184]: Mon Dec  8 09:49:07 2025: (VI_0) Entering BACKUP STATE (init)
Dec 08 09:49:07 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-keepalived-nfs-cephfs-compute-1-khfxdl[86184]: Mon Dec  8 09:49:07 2025: VRRP_Script(check_backend) succeeded
Dec 08 09:49:07 compute-1 sudo[85862]: pam_unix(sudo:session): session closed for user root
Dec 08 09:49:07 compute-1 ceph-mon[79846]: pgmap v35: 198 pgs: 198 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 08 09:49:07 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:07 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c002520 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:07 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:07 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e60002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:08 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:49:08 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:49:08 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:49:08 compute-1 ceph-mon[79846]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 08 09:49:08 compute-1 ceph-mon[79846]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 08 09:49:08 compute-1 ceph-mon[79846]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec 08 09:49:08 compute-1 ceph-mon[79846]: Deploying daemon keepalived.nfs.cephfs.compute-2.bcrsho on compute-2
Dec 08 09:49:09 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:09 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e64001fc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:09 compute-1 ceph-mon[79846]: pgmap v36: 198 pgs: 198 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 08 09:49:09 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:09 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e780032a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:09 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e49 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 08 09:49:09 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:09 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e780032a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:10 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Dec 08 09:49:10 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e50 e50: 3 total, 3 up, 3 in
Dec 08 09:49:11 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-keepalived-nfs-cephfs-compute-1-khfxdl[86184]: Mon Dec  8 09:49:11 2025: (VI_0) Entering MASTER STATE
Dec 08 09:49:11 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-keepalived-nfs-cephfs-compute-1-khfxdl[86184]: Mon Dec  8 09:49:11 2025: (VI_0) Master received advert from 192.168.122.100 with higher priority 100, ours 90
Dec 08 09:49:11 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-keepalived-nfs-cephfs-compute-1-khfxdl[86184]: Mon Dec  8 09:49:11 2025: (VI_0) Entering BACKUP STATE
Dec 08 09:49:11 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:11 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e60002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:11 compute-1 ceph-mon[79846]: pgmap v37: 198 pgs: 198 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 08 09:49:11 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Dec 08 09:49:11 compute-1 ceph-mon[79846]: osdmap e50: 3 total, 3 up, 3 in
Dec 08 09:49:11 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Dec 08 09:49:11 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e51 e51: 3 total, 3 up, 3 in
Dec 08 09:49:11 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:11 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e640036e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:11 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:11 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e780032a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:12 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e52 e52: 3 total, 3 up, 3 in
Dec 08 09:49:12 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Dec 08 09:49:12 compute-1 ceph-mon[79846]: osdmap e51: 3 total, 3 up, 3 in
Dec 08 09:49:12 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Dec 08 09:49:12 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:49:12 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:49:12 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:49:12 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:49:12 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 08 09:49:12 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 08 09:49:13 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:13 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e780032a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:13 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e53 e53: 3 total, 3 up, 3 in
Dec 08 09:49:13 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:13 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e60003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:13 compute-1 ceph-mon[79846]: Deploying daemon alertmanager.compute-0 on compute-0
Dec 08 09:49:13 compute-1 ceph-mon[79846]: pgmap v40: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 0 op/s
Dec 08 09:49:13 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Dec 08 09:49:13 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Dec 08 09:49:13 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Dec 08 09:49:13 compute-1 ceph-mon[79846]: osdmap e52: 3 total, 3 up, 3 in
Dec 08 09:49:13 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Dec 08 09:49:13 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Dec 08 09:49:13 compute-1 ceph-mon[79846]: osdmap e53: 3 total, 3 up, 3 in
Dec 08 09:49:13 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Dec 08 09:49:13 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:13 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e640036e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:14 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e54 e54: 3 total, 3 up, 3 in
Dec 08 09:49:14 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 54 pg[10.0( v 49'120 (0'0,49'120] local-lis/les=38/39 n=8 ec=38/38 lis/c=38/38 les/c/f=39/39/0 sis=54 pruub=8.765053749s) [0] r=0 lpr=54 pi=[38,54)/1 crt=49'120 lcod 49'119 mlcod 49'119 active pruub 172.416519165s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:14 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 54 pg[10.0( v 49'120 lc 0'0 (0'0,49'120] local-lis/les=38/39 n=0 ec=38/38 lis/c=38/38 les/c/f=39/39/0 sis=54 pruub=8.765053749s) [0] r=0 lpr=54 pi=[38,54)/1 crt=49'120 lcod 49'119 mlcod 0'0 unknown pruub 172.416519165s@ mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:14 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 08 09:49:14 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 08 09:49:14 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Dec 08 09:49:14 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Dec 08 09:49:14 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec 08 09:49:14 compute-1 ceph-mon[79846]: osdmap e54: 3 total, 3 up, 3 in
Dec 08 09:49:14 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 08 09:49:14 compute-1 sshd-session[86195]: Received disconnect from 95.128.196.223 port 45084:11: Bye Bye [preauth]
Dec 08 09:49:14 compute-1 sshd-session[86195]: Disconnected from authenticating user root 95.128.196.223 port 45084 [preauth]
Dec 08 09:49:15 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:15 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e780032a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:15 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:15 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e780032a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:15 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e55 e55: 3 total, 3 up, 3 in
Dec 08 09:49:15 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 55 pg[10.1b( v 49'120 lc 0'0 (0'0,49'120] local-lis/les=38/39 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [0] r=0 lpr=54 pi=[38,54)/1 crt=49'120 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:15 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 55 pg[10.11( v 49'120 lc 0'0 (0'0,49'120] local-lis/les=38/39 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [0] r=0 lpr=54 pi=[38,54)/1 crt=49'120 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:15 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 55 pg[10.7( v 49'120 lc 0'0 (0'0,49'120] local-lis/les=38/39 n=1 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [0] r=0 lpr=54 pi=[38,54)/1 crt=49'120 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:15 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 55 pg[10.12( v 49'120 lc 0'0 (0'0,49'120] local-lis/les=38/39 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [0] r=0 lpr=54 pi=[38,54)/1 crt=49'120 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:15 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 55 pg[10.10( v 49'120 lc 0'0 (0'0,49'120] local-lis/les=38/39 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [0] r=0 lpr=54 pi=[38,54)/1 crt=49'120 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:15 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 55 pg[10.1f( v 49'120 lc 0'0 (0'0,49'120] local-lis/les=38/39 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [0] r=0 lpr=54 pi=[38,54)/1 crt=49'120 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:15 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 55 pg[10.1d( v 49'120 lc 0'0 (0'0,49'120] local-lis/les=38/39 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [0] r=0 lpr=54 pi=[38,54)/1 crt=49'120 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:15 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 55 pg[10.1e( v 49'120 lc 0'0 (0'0,49'120] local-lis/les=38/39 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [0] r=0 lpr=54 pi=[38,54)/1 crt=49'120 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:15 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 55 pg[10.1c( v 49'120 lc 0'0 (0'0,49'120] local-lis/les=38/39 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [0] r=0 lpr=54 pi=[38,54)/1 crt=49'120 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:15 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 55 pg[10.1a( v 49'120 lc 0'0 (0'0,49'120] local-lis/les=38/39 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [0] r=0 lpr=54 pi=[38,54)/1 crt=49'120 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:15 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 55 pg[10.19( v 49'120 lc 0'0 (0'0,49'120] local-lis/les=38/39 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [0] r=0 lpr=54 pi=[38,54)/1 crt=49'120 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:15 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 55 pg[10.18( v 49'120 lc 0'0 (0'0,49'120] local-lis/les=38/39 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [0] r=0 lpr=54 pi=[38,54)/1 crt=49'120 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:15 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 55 pg[10.6( v 49'120 lc 0'0 (0'0,49'120] local-lis/les=38/39 n=1 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [0] r=0 lpr=54 pi=[38,54)/1 crt=49'120 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:15 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 55 pg[10.3( v 49'120 lc 0'0 (0'0,49'120] local-lis/les=38/39 n=1 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [0] r=0 lpr=54 pi=[38,54)/1 crt=49'120 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:15 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 55 pg[10.4( v 49'120 lc 0'0 (0'0,49'120] local-lis/les=38/39 n=1 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [0] r=0 lpr=54 pi=[38,54)/1 crt=49'120 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:15 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 55 pg[10.5( v 49'120 lc 0'0 (0'0,49'120] local-lis/les=38/39 n=1 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [0] r=0 lpr=54 pi=[38,54)/1 crt=49'120 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:15 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 55 pg[10.b( v 49'120 lc 0'0 (0'0,49'120] local-lis/les=38/39 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [0] r=0 lpr=54 pi=[38,54)/1 crt=49'120 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:15 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 55 pg[10.d( v 49'120 lc 0'0 (0'0,49'120] local-lis/les=38/39 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [0] r=0 lpr=54 pi=[38,54)/1 crt=49'120 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:15 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 55 pg[10.9( v 49'120 lc 0'0 (0'0,49'120] local-lis/les=38/39 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [0] r=0 lpr=54 pi=[38,54)/1 crt=49'120 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:15 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 55 pg[10.8( v 49'120 lc 0'0 (0'0,49'120] local-lis/les=38/39 n=1 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [0] r=0 lpr=54 pi=[38,54)/1 crt=49'120 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:15 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 55 pg[10.a( v 49'120 lc 0'0 (0'0,49'120] local-lis/les=38/39 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [0] r=0 lpr=54 pi=[38,54)/1 crt=49'120 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:15 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 55 pg[10.c( v 49'120 lc 0'0 (0'0,49'120] local-lis/les=38/39 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [0] r=0 lpr=54 pi=[38,54)/1 crt=49'120 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:15 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 55 pg[10.e( v 49'120 lc 0'0 (0'0,49'120] local-lis/les=38/39 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [0] r=0 lpr=54 pi=[38,54)/1 crt=49'120 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:15 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 55 pg[10.f( v 49'120 lc 0'0 (0'0,49'120] local-lis/les=38/39 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [0] r=0 lpr=54 pi=[38,54)/1 crt=49'120 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:15 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 55 pg[10.1( v 49'120 (0'0,49'120] local-lis/les=38/39 n=1 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [0] r=0 lpr=54 pi=[38,54)/1 crt=49'120 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:15 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 55 pg[10.2( v 49'120 lc 0'0 (0'0,49'120] local-lis/les=38/39 n=1 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [0] r=0 lpr=54 pi=[38,54)/1 crt=49'120 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:15 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 55 pg[10.13( v 49'120 lc 0'0 (0'0,49'120] local-lis/les=38/39 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [0] r=0 lpr=54 pi=[38,54)/1 crt=49'120 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:15 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 55 pg[10.14( v 49'120 lc 0'0 (0'0,49'120] local-lis/les=38/39 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [0] r=0 lpr=54 pi=[38,54)/1 crt=49'120 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:15 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 55 pg[10.15( v 49'120 lc 0'0 (0'0,49'120] local-lis/les=38/39 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [0] r=0 lpr=54 pi=[38,54)/1 crt=49'120 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:15 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 55 pg[10.16( v 49'120 lc 0'0 (0'0,49'120] local-lis/les=38/39 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [0] r=0 lpr=54 pi=[38,54)/1 crt=49'120 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:15 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 55 pg[10.17( v 49'120 lc 0'0 (0'0,49'120] local-lis/les=38/39 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [0] r=0 lpr=54 pi=[38,54)/1 crt=49'120 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:15 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 55 pg[10.1b( v 49'120 (0'0,49'120] local-lis/les=54/55 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [0] r=0 lpr=54 pi=[38,54)/1 crt=49'120 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:15 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 55 pg[10.12( v 49'120 (0'0,49'120] local-lis/les=54/55 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [0] r=0 lpr=54 pi=[38,54)/1 crt=49'120 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:15 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 55 pg[10.1f( v 49'120 (0'0,49'120] local-lis/les=54/55 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [0] r=0 lpr=54 pi=[38,54)/1 crt=49'120 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:15 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 55 pg[10.7( v 49'120 (0'0,49'120] local-lis/les=54/55 n=1 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [0] r=0 lpr=54 pi=[38,54)/1 crt=49'120 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:15 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 55 pg[10.1d( v 49'120 (0'0,49'120] local-lis/les=54/55 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [0] r=0 lpr=54 pi=[38,54)/1 crt=49'120 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:15 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 55 pg[10.10( v 49'120 (0'0,49'120] local-lis/les=54/55 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [0] r=0 lpr=54 pi=[38,54)/1 crt=49'120 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:15 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 55 pg[10.1c( v 49'120 (0'0,49'120] local-lis/les=54/55 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [0] r=0 lpr=54 pi=[38,54)/1 crt=49'120 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:15 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 55 pg[10.1e( v 49'120 (0'0,49'120] local-lis/les=54/55 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [0] r=0 lpr=54 pi=[38,54)/1 crt=49'120 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:15 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 55 pg[10.19( v 49'120 (0'0,49'120] local-lis/les=54/55 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [0] r=0 lpr=54 pi=[38,54)/1 crt=49'120 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:15 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 55 pg[10.1a( v 49'120 (0'0,49'120] local-lis/les=54/55 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [0] r=0 lpr=54 pi=[38,54)/1 crt=49'120 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:15 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 55 pg[10.6( v 49'120 (0'0,49'120] local-lis/les=54/55 n=1 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [0] r=0 lpr=54 pi=[38,54)/1 crt=49'120 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:15 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 55 pg[10.18( v 49'120 (0'0,49'120] local-lis/les=54/55 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [0] r=0 lpr=54 pi=[38,54)/1 crt=49'120 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:15 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 55 pg[10.3( v 49'120 (0'0,49'120] local-lis/les=54/55 n=1 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [0] r=0 lpr=54 pi=[38,54)/1 crt=49'120 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:15 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 55 pg[10.5( v 49'120 (0'0,49'120] local-lis/les=54/55 n=1 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [0] r=0 lpr=54 pi=[38,54)/1 crt=49'120 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:15 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 55 pg[10.4( v 49'120 (0'0,49'120] local-lis/les=54/55 n=1 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [0] r=0 lpr=54 pi=[38,54)/1 crt=49'120 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:15 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 55 pg[10.b( v 49'120 (0'0,49'120] local-lis/les=54/55 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [0] r=0 lpr=54 pi=[38,54)/1 crt=49'120 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:15 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 55 pg[10.d( v 49'120 (0'0,49'120] local-lis/les=54/55 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [0] r=0 lpr=54 pi=[38,54)/1 crt=49'120 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:15 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 55 pg[10.11( v 49'120 (0'0,49'120] local-lis/les=54/55 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [0] r=0 lpr=54 pi=[38,54)/1 crt=49'120 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:15 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 55 pg[10.a( v 49'120 (0'0,49'120] local-lis/les=54/55 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [0] r=0 lpr=54 pi=[38,54)/1 crt=49'120 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:15 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 55 pg[10.9( v 49'120 (0'0,49'120] local-lis/les=54/55 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [0] r=0 lpr=54 pi=[38,54)/1 crt=49'120 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:15 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 55 pg[10.c( v 49'120 (0'0,49'120] local-lis/les=54/55 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [0] r=0 lpr=54 pi=[38,54)/1 crt=49'120 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:15 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 55 pg[10.e( v 49'120 (0'0,49'120] local-lis/les=54/55 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [0] r=0 lpr=54 pi=[38,54)/1 crt=49'120 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:15 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 55 pg[10.f( v 49'120 (0'0,49'120] local-lis/les=54/55 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [0] r=0 lpr=54 pi=[38,54)/1 crt=49'120 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:15 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 55 pg[10.0( v 49'120 (0'0,49'120] local-lis/les=54/55 n=0 ec=38/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [0] r=0 lpr=54 pi=[38,54)/1 crt=49'120 lcod 49'119 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:15 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 55 pg[10.1( v 49'120 (0'0,49'120] local-lis/les=54/55 n=1 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [0] r=0 lpr=54 pi=[38,54)/1 crt=49'120 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:15 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 55 pg[10.13( v 49'120 (0'0,49'120] local-lis/les=54/55 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [0] r=0 lpr=54 pi=[38,54)/1 crt=49'120 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:15 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 55 pg[10.2( v 49'120 (0'0,49'120] local-lis/les=54/55 n=1 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [0] r=0 lpr=54 pi=[38,54)/1 crt=49'120 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:15 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 55 pg[10.14( v 49'120 (0'0,49'120] local-lis/les=54/55 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [0] r=0 lpr=54 pi=[38,54)/1 crt=49'120 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:15 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 55 pg[10.16( v 49'120 (0'0,49'120] local-lis/les=54/55 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [0] r=0 lpr=54 pi=[38,54)/1 crt=49'120 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:15 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 55 pg[10.17( v 49'120 (0'0,49'120] local-lis/les=54/55 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [0] r=0 lpr=54 pi=[38,54)/1 crt=49'120 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:15 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 55 pg[10.8( v 49'120 (0'0,49'120] local-lis/les=54/55 n=1 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [0] r=0 lpr=54 pi=[38,54)/1 crt=49'120 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:15 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 55 pg[10.15( v 49'120 (0'0,49'120] local-lis/les=54/55 n=0 ec=54/38 lis/c=38/38 les/c/f=39/39/0 sis=54) [0] r=0 lpr=54 pi=[38,54)/1 crt=49'120 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:15 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:15 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e60003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:15 compute-1 ceph-mon[79846]: pgmap v43: 260 pgs: 62 unknown, 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec 08 09:49:15 compute-1 ceph-mon[79846]: 8.15 deep-scrub starts
Dec 08 09:49:15 compute-1 ceph-mon[79846]: 8.15 deep-scrub ok
Dec 08 09:49:15 compute-1 ceph-mon[79846]: osdmap e55: 3 total, 3 up, 3 in
Dec 08 09:49:15 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:49:16 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Dec 08 09:49:16 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Dec 08 09:49:17 compute-1 ceph-mon[79846]: 9.16 scrub starts
Dec 08 09:49:17 compute-1 ceph-mon[79846]: 9.16 scrub ok
Dec 08 09:49:17 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:49:17 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:49:17 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:49:17 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:49:17 compute-1 ceph-mon[79846]: Regenerating cephadm self-signed grafana TLS certificates
Dec 08 09:49:17 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:49:17 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:49:17 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Dec 08 09:49:17 compute-1 ceph-mon[79846]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Dec 08 09:49:17 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:49:17 compute-1 ceph-mon[79846]: Deploying daemon grafana.compute-0 on compute-0
Dec 08 09:49:17 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 08 09:49:17 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e56 e56: 3 total, 3 up, 3 in
Dec 08 09:49:17 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:17 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e640036e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:17 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 56 pg[12.0( v 49'44 (0'0,49'44] local-lis/les=47/48 n=5 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=56 pruub=9.310081482s) [0] r=0 lpr=56 pi=[47,56)/1 crt=49'43 lcod 49'43 mlcod 49'43 active pruub 175.783447266s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:17 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 56 pg[12.0( v 49'44 lc 0'0 (0'0,49'44] local-lis/les=47/48 n=0 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=56 pruub=9.310081482s) [0] r=0 lpr=56 pi=[47,56)/1 crt=49'43 lcod 49'43 mlcod 0'0 unknown pruub 175.783447266s@ mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:17 compute-1 ceph-osd[77531]: bluestore(/var/lib/ceph/osd/ceph-0).collection(12.0_head 0x5640a5bebd40) operator()   moving buffer(0x5640a47b3068 space 0x5640a3bec690 0x0~1000 clean)
Dec 08 09:49:17 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Dec 08 09:49:17 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Dec 08 09:49:17 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:17 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e780032a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:17 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:17 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e780032a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:18 compute-1 ceph-mon[79846]: pgmap v46: 322 pgs: 124 unknown, 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec 08 09:49:18 compute-1 ceph-mon[79846]: 10.1b scrub starts
Dec 08 09:49:18 compute-1 ceph-mon[79846]: 10.1b scrub ok
Dec 08 09:49:18 compute-1 ceph-mon[79846]: 8.16 deep-scrub starts
Dec 08 09:49:18 compute-1 ceph-mon[79846]: 8.16 deep-scrub ok
Dec 08 09:49:18 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Dec 08 09:49:18 compute-1 ceph-mon[79846]: osdmap e56: 3 total, 3 up, 3 in
Dec 08 09:49:18 compute-1 ceph-mon[79846]: 10.12 scrub starts
Dec 08 09:49:18 compute-1 ceph-mon[79846]: 10.12 scrub ok
Dec 08 09:49:18 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e57 e57: 3 total, 3 up, 3 in
Dec 08 09:49:18 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 57 pg[12.11( v 49'44 lc 0'0 (0'0,49'44] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=49'43 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:18 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 57 pg[12.13( v 49'44 lc 0'0 (0'0,49'44] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=49'43 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:18 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 57 pg[12.10( v 49'44 lc 0'0 (0'0,49'44] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=49'43 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:18 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 57 pg[12.12( v 49'44 lc 0'0 (0'0,49'44] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=49'43 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:18 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 57 pg[12.15( v 49'44 lc 0'0 (0'0,49'44] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=49'43 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:18 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 57 pg[12.4( v 49'44 lc 0'0 (0'0,49'44] local-lis/les=47/48 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=49'43 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:18 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 57 pg[12.7( v 49'44 lc 0'0 (0'0,49'44] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=49'43 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:18 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 57 pg[12.6( v 49'44 lc 0'0 (0'0,49'44] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=49'43 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:18 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 57 pg[12.9( v 49'44 lc 0'0 (0'0,49'44] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=49'43 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:18 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 57 pg[12.8( v 49'44 lc 0'0 (0'0,49'44] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=49'43 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:18 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 57 pg[12.a( v 49'44 lc 0'0 (0'0,49'44] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=49'43 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:18 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 57 pg[12.c( v 49'44 lc 0'0 (0'0,49'44] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=49'43 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:18 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 57 pg[12.b( v 49'44 lc 0'0 (0'0,49'44] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=49'43 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:18 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 57 pg[12.f( v 49'44 lc 0'0 (0'0,49'44] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=49'43 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:18 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 57 pg[12.e( v 49'44 lc 0'0 (0'0,49'44] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=49'43 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:18 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 57 pg[12.d( v 49'44 lc 0'0 (0'0,49'44] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=49'43 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:18 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 57 pg[12.5( v 49'44 lc 0'0 (0'0,49'44] local-lis/les=47/48 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=49'43 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:18 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 57 pg[12.2( v 49'44 lc 0'0 (0'0,49'44] local-lis/les=47/48 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=49'43 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:18 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 57 pg[12.3( v 49'44 lc 0'0 (0'0,49'44] local-lis/les=47/48 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=49'43 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:18 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 57 pg[12.1e( v 49'44 lc 0'0 (0'0,49'44] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=49'43 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:18 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 57 pg[12.1f( v 49'44 lc 0'0 (0'0,49'44] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=49'43 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:18 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 57 pg[12.1a( v 49'44 lc 0'0 (0'0,49'44] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=49'43 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:18 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 57 pg[12.1b( v 49'44 lc 0'0 (0'0,49'44] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=49'43 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:18 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 57 pg[12.1c( v 49'44 lc 0'0 (0'0,49'44] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=49'43 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:18 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 57 pg[12.18( v 49'44 lc 0'0 (0'0,49'44] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=49'43 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:18 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 57 pg[12.19( v 49'44 lc 0'0 (0'0,49'44] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=49'43 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:18 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 57 pg[12.16( v 49'44 lc 0'0 (0'0,49'44] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=49'43 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:18 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 57 pg[12.17( v 49'44 lc 0'0 (0'0,49'44] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=49'43 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:18 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 57 pg[12.1( v 49'44 (0'0,49'44] local-lis/les=47/48 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=49'43 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:18 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 57 pg[12.14( v 49'44 lc 0'0 (0'0,49'44] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=49'43 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:18 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 57 pg[12.1d( v 49'44 lc 0'0 (0'0,49'44] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=49'43 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:18 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 57 pg[12.11( v 49'44 (0'0,49'44] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=49'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:18 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 57 pg[12.12( v 49'44 (0'0,49'44] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=49'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:18 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 57 pg[12.13( v 49'44 (0'0,49'44] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=49'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:18 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 57 pg[12.15( v 49'44 (0'0,49'44] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=49'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:18 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 57 pg[12.4( v 49'44 (0'0,49'44] local-lis/les=56/57 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=49'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:18 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 57 pg[12.10( v 49'44 (0'0,49'44] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=49'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:18 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 57 pg[12.7( v 49'44 (0'0,49'44] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=49'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:18 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 57 pg[12.6( v 49'44 (0'0,49'44] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=49'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:18 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 57 pg[12.9( v 49'44 (0'0,49'44] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=49'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:18 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 57 pg[12.a( v 49'44 (0'0,49'44] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=49'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:18 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 57 pg[12.c( v 49'44 (0'0,49'44] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=49'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:18 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 57 pg[12.b( v 49'44 (0'0,49'44] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=49'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:18 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 57 pg[12.f( v 49'44 (0'0,49'44] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=49'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:18 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 57 pg[12.e( v 49'44 (0'0,49'44] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=49'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:18 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 57 pg[12.2( v 49'44 (0'0,49'44] local-lis/les=56/57 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=49'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:18 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 57 pg[12.8( v 49'44 (0'0,49'44] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=49'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:18 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 57 pg[12.3( v 49'44 (0'0,49'44] local-lis/les=56/57 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=49'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:18 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 57 pg[12.d( v 49'44 (0'0,49'44] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=49'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:18 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 57 pg[12.0( v 49'44 (0'0,49'44] local-lis/les=56/57 n=0 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=49'44 lcod 49'43 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:18 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 57 pg[12.1e( v 49'44 (0'0,49'44] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=49'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:18 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 57 pg[12.5( v 49'44 (0'0,49'44] local-lis/les=56/57 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=49'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:18 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 57 pg[12.1a( v 49'44 (0'0,49'44] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=49'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:18 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 57 pg[12.1f( v 49'44 (0'0,49'44] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=49'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:18 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 57 pg[12.1b( v 49'44 (0'0,49'44] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=49'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:18 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 57 pg[12.18( v 49'44 (0'0,49'44] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=49'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:18 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 57 pg[12.19( v 49'44 (0'0,49'44] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=49'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:18 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 57 pg[12.17( v 49'44 (0'0,49'44] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=49'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:18 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 57 pg[12.16( v 49'44 (0'0,49'44] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=49'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:18 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 57 pg[12.1( v 49'44 (0'0,49'44] local-lis/les=56/57 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=49'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:18 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 57 pg[12.1c( v 49'44 (0'0,49'44] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=49'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:18 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 57 pg[12.14( v 49'44 (0'0,49'44] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=49'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:18 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 57 pg[12.1d( v 49'44 (0'0,49'44] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=49'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:18 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Dec 08 09:49:18 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Dec 08 09:49:19 compute-1 ceph-mon[79846]: 8.14 scrub starts
Dec 08 09:49:19 compute-1 ceph-mon[79846]: 8.14 scrub ok
Dec 08 09:49:19 compute-1 ceph-mon[79846]: osdmap e57: 3 total, 3 up, 3 in
Dec 08 09:49:19 compute-1 ceph-mon[79846]: 10.7 scrub starts
Dec 08 09:49:19 compute-1 ceph-mon[79846]: 10.7 scrub ok
Dec 08 09:49:19 compute-1 sshd-session[86199]: Invalid user user from 79.32.212.213 port 35200
Dec 08 09:49:19 compute-1 sshd-session[86199]: Received disconnect from 79.32.212.213 port 35200:11: Bye Bye [preauth]
Dec 08 09:49:19 compute-1 sshd-session[86199]: Disconnected from invalid user user 79.32.212.213 port 35200 [preauth]
Dec 08 09:49:19 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:19 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e60003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:19 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Dec 08 09:49:19 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Dec 08 09:49:19 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:19 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e640036e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:19 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e57 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 08 09:49:19 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:19 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e780032a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:20 compute-1 ceph-mon[79846]: pgmap v49: 353 pgs: 31 unknown, 322 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec 08 09:49:20 compute-1 ceph-mon[79846]: 8.10 scrub starts
Dec 08 09:49:20 compute-1 ceph-mon[79846]: 8.10 scrub ok
Dec 08 09:49:20 compute-1 ceph-mon[79846]: 10.1f scrub starts
Dec 08 09:49:20 compute-1 ceph-mon[79846]: 10.1f scrub ok
Dec 08 09:49:20 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Dec 08 09:49:20 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Dec 08 09:49:21 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:21 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e780032a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:21 compute-1 ceph-mon[79846]: 9.3 scrub starts
Dec 08 09:49:21 compute-1 ceph-mon[79846]: 9.3 scrub ok
Dec 08 09:49:21 compute-1 ceph-mon[79846]: pgmap v50: 353 pgs: 31 unknown, 322 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec 08 09:49:21 compute-1 ceph-mon[79846]: 10.1d scrub starts
Dec 08 09:49:21 compute-1 ceph-mon[79846]: 10.1d scrub ok
Dec 08 09:49:21 compute-1 ceph-mon[79846]: 9.11 scrub starts
Dec 08 09:49:21 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:49:21 compute-1 ceph-mon[79846]: 9.11 scrub ok
Dec 08 09:49:21 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Dec 08 09:49:21 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Dec 08 09:49:21 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:21 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e60003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:21 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:21 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e640036e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:22 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Dec 08 09:49:22 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Dec 08 09:49:23 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e58 e58: 3 total, 3 up, 3 in
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[11.1a( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[8.19( empty local-lis/les=0/0 n=0 ec=52/34 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[8.10( empty local-lis/les=0/0 n=0 ec=52/34 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[11.1e( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[11.1c( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[8.18( empty local-lis/les=0/0 n=0 ec=52/34 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[8.1b( empty local-lis/les=0/0 n=0 ec=52/34 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[11.7( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[8.4( empty local-lis/les=0/0 n=0 ec=52/34 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[11.5( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[8.8( empty local-lis/les=0/0 n=0 ec=52/34 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[11.f( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[11.1( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[11.12( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[11.14( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[8.17( empty local-lis/les=0/0 n=0 ec=52/34 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[8.14( empty local-lis/les=0/0 n=0 ec=52/34 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[11.1d( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[12.11( v 49'44 (0'0,49'44] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.086849213s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=49'44 lcod 0'0 mlcod 0'0 active pruub 183.238174438s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[12.10( v 57'47 (0'0,57'47] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.091133118s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=57'45 lcod 57'46 mlcod 57'46 active pruub 183.242523193s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[10.15( v 57'123 (0'0,57'123] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.519123077s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=49'120 lcod 57'122 mlcod 57'122 active pruub 180.670578003s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[12.13( v 49'44 (0'0,49'44] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.090995789s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=49'44 lcod 0'0 mlcod 0'0 active pruub 183.242462158s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[12.10( v 57'47 (0'0,57'47] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.091084480s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=57'45 lcod 57'46 mlcod 0'0 unknown NOTIFY pruub 183.242523193s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[12.13( v 49'44 (0'0,49'44] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.090965271s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=49'44 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 183.242462158s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[10.15( v 57'123 (0'0,57'123] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.519062996s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=49'120 lcod 57'122 mlcod 0'0 unknown NOTIFY pruub 180.670578003s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:49:23 compute-1 ceph-mon[79846]: 10.10 scrub starts
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[10.14( v 57'123 (0'0,57'123] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.518656731s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=49'120 lcod 57'122 mlcod 57'122 active pruub 180.670516968s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[10.13( v 49'120 (0'0,49'120] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.518555641s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=49'120 lcod 0'0 mlcod 0'0 active pruub 180.670501709s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[11.4( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[10.14( v 57'123 (0'0,57'123] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.518587112s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=49'120 lcod 57'122 mlcod 0'0 unknown NOTIFY pruub 180.670516968s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[10.13( v 49'120 (0'0,49'120] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.518529892s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=49'120 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.670501709s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[12.4( v 49'44 (0'0,49'44] local-lis/les=56/57 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.090242386s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=49'44 lcod 0'0 mlcod 0'0 active pruub 183.242507935s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[12.4( v 49'44 (0'0,49'44] local-lis/les=56/57 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.090214729s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=49'44 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 183.242507935s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[10.1( v 49'120 (0'0,49'120] local-lis/les=54/55 n=1 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.518154144s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=49'120 lcod 0'0 mlcod 0'0 active pruub 180.670486450s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[10.1( v 49'120 (0'0,49'120] local-lis/les=54/55 n=1 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.518134117s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=49'120 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.670486450s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[12.12( v 49'44 (0'0,49'44] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.090183258s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=49'44 lcod 0'0 mlcod 0'0 active pruub 183.242431641s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[10.2( v 49'120 (0'0,49'120] local-lis/les=54/55 n=1 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.517933846s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=49'120 lcod 0'0 mlcod 0'0 active pruub 180.670486450s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[12.7( v 49'44 (0'0,49'44] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.089979172s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=49'44 lcod 0'0 mlcod 0'0 active pruub 183.242538452s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[10.2( v 49'120 (0'0,49'120] local-lis/les=54/55 n=1 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.517903328s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=49'120 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.670486450s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[12.7( v 49'44 (0'0,49'44] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.089954376s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=49'44 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 183.242538452s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[12.6( v 49'44 (0'0,49'44] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.089887619s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=49'44 lcod 0'0 mlcod 0'0 active pruub 183.242553711s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[12.6( v 49'44 (0'0,49'44] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.089865685s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=49'44 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 183.242553711s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:49:23 compute-1 ceph-mon[79846]: 10.10 scrub ok
Dec 08 09:49:23 compute-1 ceph-mon[79846]: 8.17 scrub starts
Dec 08 09:49:23 compute-1 ceph-mon[79846]: 8.17 scrub ok
Dec 08 09:49:23 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 08 09:49:23 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 08 09:49:23 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 08 09:49:23 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Dec 08 09:49:23 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[12.12( v 49'44 (0'0,49'44] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.089852333s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=49'44 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 183.242431641s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[12.11( v 49'44 (0'0,49'44] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.086796761s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=49'44 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 183.238174438s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[11.1b( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[8.12( empty local-lis/les=0/0 n=0 ec=52/34 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[10.f( v 49'120 (0'0,49'120] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.517017365s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=49'120 lcod 0'0 mlcod 0'0 active pruub 180.670471191s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[10.f( v 49'120 (0'0,49'120] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.516951561s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=49'120 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.670471191s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[12.9( v 49'44 (0'0,49'44] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.089052200s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=49'44 lcod 0'0 mlcod 0'0 active pruub 183.242584229s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[12.9( v 49'44 (0'0,49'44] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.089023590s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=49'44 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 183.242584229s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[12.8( v 49'44 (0'0,49'44] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.089287758s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=49'44 lcod 0'0 mlcod 0'0 active pruub 183.243041992s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[12.8( v 49'44 (0'0,49'44] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.089268684s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=49'44 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 183.243041992s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[12.a( v 49'44 (0'0,49'44] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.088755608s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=49'44 lcod 0'0 mlcod 0'0 active pruub 183.242630005s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[12.c( v 49'44 (0'0,49'44] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.088546753s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=49'44 lcod 0'0 mlcod 0'0 active pruub 183.242889404s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[12.b( v 49'44 (0'0,49'44] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.088532448s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=49'44 lcod 0'0 mlcod 0'0 active pruub 183.242919922s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[12.b( v 49'44 (0'0,49'44] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.088499069s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=49'44 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 183.242919922s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[10.8( v 49'120 (0'0,49'120] local-lis/les=54/55 n=1 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.516087532s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=49'120 lcod 0'0 mlcod 0'0 active pruub 180.670547485s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[10.8( v 49'120 (0'0,49'120] local-lis/les=54/55 n=1 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.516048431s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=49'120 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.670547485s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[12.c( v 49'44 (0'0,49'44] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.088503838s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=49'44 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 183.242889404s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[12.e( v 49'44 (0'0,49'44] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.088437080s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=49'44 lcod 0'0 mlcod 0'0 active pruub 183.242980957s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[12.e( v 49'44 (0'0,49'44] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.088421822s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=49'44 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 183.242980957s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[12.a( v 49'44 (0'0,49'44] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.088623047s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=49'44 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 183.242630005s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[10.3( v 57'123 (0'0,57'123] local-lis/les=54/55 n=1 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.515381813s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=49'120 lcod 57'122 mlcod 57'122 active pruub 180.670303345s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[10.3( v 57'123 (0'0,57'123] local-lis/les=54/55 n=1 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.515333176s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=49'120 lcod 57'122 mlcod 0'0 unknown NOTIFY pruub 180.670303345s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[12.2( v 49'44 (0'0,49'44] local-lis/les=56/57 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.087953568s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=49'44 lcod 0'0 mlcod 0'0 active pruub 183.243026733s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[12.2( v 49'44 (0'0,49'44] local-lis/les=56/57 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.087926865s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=49'44 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 183.243026733s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[10.4( v 49'120 (0'0,49'120] local-lis/les=54/55 n=1 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.515010834s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=49'120 lcod 0'0 mlcod 0'0 active pruub 180.670303345s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[10.5( v 49'120 (0'0,49'120] local-lis/les=54/55 n=1 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.514918327s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=49'120 lcod 0'0 mlcod 0'0 active pruub 180.670303345s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[10.4( v 49'120 (0'0,49'120] local-lis/les=54/55 n=1 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.514941216s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=49'120 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.670303345s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[10.5( v 49'120 (0'0,49'120] local-lis/les=54/55 n=1 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.514888763s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=49'120 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.670303345s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[10.18( v 49'120 (0'0,49'120] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.514718056s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=49'120 lcod 0'0 mlcod 0'0 active pruub 180.670288086s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[10.18( v 49'120 (0'0,49'120] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.514693260s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=49'120 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.670288086s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[12.3( v 49'44 (0'0,49'44] local-lis/les=56/57 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.087354660s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=49'44 lcod 0'0 mlcod 0'0 active pruub 183.243103027s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[12.1e( v 49'44 (0'0,49'44] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.087366104s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=49'44 lcod 0'0 mlcod 0'0 active pruub 183.243148804s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[12.1e( v 49'44 (0'0,49'44] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.087340355s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=49'44 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 183.243148804s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[10.19( v 49'120 (0'0,49'120] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.514277458s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=49'120 lcod 0'0 mlcod 0'0 active pruub 180.670181274s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[10.19( v 49'120 (0'0,49'120] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.514248848s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=49'120 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.670181274s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[12.1c( v 49'44 (0'0,49'44] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.087293625s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=49'44 lcod 0'0 mlcod 0'0 active pruub 183.243392944s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[12.1c( v 49'44 (0'0,49'44] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.087267876s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=49'44 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 183.243392944s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[12.3( v 49'44 (0'0,49'44] local-lis/les=56/57 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.087189674s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=49'44 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 183.243103027s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[12.1a( v 57'47 (0'0,57'47] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.086260796s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=57'45 lcod 57'46 mlcod 57'46 active pruub 183.243194580s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[12.1a( v 57'47 (0'0,57'47] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.086214066s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=57'45 lcod 57'46 mlcod 0'0 unknown NOTIFY pruub 183.243194580s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[10.1e( v 49'120 (0'0,49'120] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.513142586s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=49'120 lcod 0'0 mlcod 0'0 active pruub 180.670166016s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[10.1e( v 49'120 (0'0,49'120] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.513122559s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=49'120 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.670166016s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[12.19( v 49'44 (0'0,49'44] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.086171150s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=49'44 lcod 0'0 mlcod 0'0 active pruub 183.243270874s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[12.19( v 49'44 (0'0,49'44] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.086145401s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=49'44 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 183.243270874s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[10.10( v 49'120 (0'0,49'120] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.512879372s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=49'120 lcod 0'0 mlcod 0'0 active pruub 180.670104980s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[10.10( v 49'120 (0'0,49'120] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.512839317s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=49'120 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.670104980s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[12.18( v 49'44 (0'0,49'44] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.085934639s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=49'44 lcod 0'0 mlcod 0'0 active pruub 183.243240356s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[12.17( v 49'44 (0'0,49'44] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.085954666s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=49'44 lcod 0'0 mlcod 0'0 active pruub 183.243286133s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[12.18( v 49'44 (0'0,49'44] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.085905075s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=49'44 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 183.243240356s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[12.17( v 49'44 (0'0,49'44] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.085936546s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=49'44 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 183.243286133s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[12.1d( v 49'44 (0'0,49'44] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.085654259s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=49'44 lcod 0'0 mlcod 0'0 active pruub 183.243362427s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[10.11( v 49'120 (0'0,49'120] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.512632370s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=49'120 lcod 0'0 mlcod 0'0 active pruub 180.670364380s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[12.1d( v 49'44 (0'0,49'44] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.085634232s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=49'44 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 183.243362427s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[10.11( v 49'120 (0'0,49'120] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.512611389s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=49'120 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.670364380s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[10.12( v 49'120 (0'0,49'120] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.512114525s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=49'120 lcod 0'0 mlcod 0'0 active pruub 180.669967651s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[10.1b( v 49'120 (0'0,49'120] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.506289482s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=49'120 lcod 0'0 mlcod 0'0 active pruub 180.664245605s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[10.1b( v 49'120 (0'0,49'120] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.506199837s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=49'120 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.664245605s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:49:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 58 pg[10.12( v 49'120 (0'0,49'120] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.511996269s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=49'120 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.669967651s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:49:23 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:23 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e58000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:23 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Dec 08 09:49:23 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Dec 08 09:49:23 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:23 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88001340 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:23 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:23 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e60003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:24 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Dec 08 09:49:24 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Dec 08 09:49:24 compute-1 ceph-mon[79846]: pgmap v51: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec 08 09:49:24 compute-1 ceph-mon[79846]: 10.1c scrub starts
Dec 08 09:49:24 compute-1 ceph-mon[79846]: 10.1c scrub ok
Dec 08 09:49:24 compute-1 ceph-mon[79846]: 8.3 scrub starts
Dec 08 09:49:24 compute-1 ceph-mon[79846]: 8.3 scrub ok
Dec 08 09:49:24 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 08 09:49:24 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 08 09:49:24 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 08 09:49:24 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Dec 08 09:49:24 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 08 09:49:24 compute-1 ceph-mon[79846]: osdmap e58: 3 total, 3 up, 3 in
Dec 08 09:49:24 compute-1 ceph-mon[79846]: 10.17 scrub starts
Dec 08 09:49:24 compute-1 ceph-mon[79846]: 10.17 scrub ok
Dec 08 09:49:24 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e59 e59: 3 total, 3 up, 3 in
Dec 08 09:49:24 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 59 pg[10.15( v 57'123 (0'0,57'123] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=59) [1]/[0] r=0 lpr=59 pi=[54,59)/1 crt=49'120 lcod 57'122 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:24 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 59 pg[10.13( v 49'120 (0'0,49'120] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=59) [1]/[0] r=0 lpr=59 pi=[54,59)/1 crt=49'120 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:24 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 59 pg[10.13( v 49'120 (0'0,49'120] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=59) [1]/[0] r=0 lpr=59 pi=[54,59)/1 crt=49'120 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:24 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 59 pg[10.2( v 49'120 (0'0,49'120] local-lis/les=54/55 n=1 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=59) [1]/[0] r=0 lpr=59 pi=[54,59)/1 crt=49'120 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:24 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 59 pg[10.14( v 57'123 (0'0,57'123] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=59) [1]/[0] r=0 lpr=59 pi=[54,59)/1 crt=49'120 lcod 57'122 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:24 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 59 pg[10.2( v 49'120 (0'0,49'120] local-lis/les=54/55 n=1 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=59) [1]/[0] r=0 lpr=59 pi=[54,59)/1 crt=49'120 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:24 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 59 pg[10.14( v 57'123 (0'0,57'123] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=59) [1]/[0] r=0 lpr=59 pi=[54,59)/1 crt=49'120 lcod 57'122 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:24 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 59 pg[10.15( v 57'123 (0'0,57'123] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=59) [1]/[0] r=0 lpr=59 pi=[54,59)/1 crt=49'120 lcod 57'122 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:24 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 59 pg[10.1( v 49'120 (0'0,49'120] local-lis/les=54/55 n=1 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=59) [2]/[0] r=0 lpr=59 pi=[54,59)/1 crt=49'120 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:24 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 59 pg[10.1( v 49'120 (0'0,49'120] local-lis/les=54/55 n=1 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=59) [2]/[0] r=0 lpr=59 pi=[54,59)/1 crt=49'120 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:24 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 59 pg[10.f( v 49'120 (0'0,49'120] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=59) [2]/[0] r=0 lpr=59 pi=[54,59)/1 crt=49'120 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:24 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 59 pg[10.f( v 49'120 (0'0,49'120] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=59) [2]/[0] r=0 lpr=59 pi=[54,59)/1 crt=49'120 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:24 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 59 pg[10.8( v 49'120 (0'0,49'120] local-lis/les=54/55 n=1 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=59) [1]/[0] r=0 lpr=59 pi=[54,59)/1 crt=49'120 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:24 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 59 pg[10.8( v 49'120 (0'0,49'120] local-lis/les=54/55 n=1 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=59) [1]/[0] r=0 lpr=59 pi=[54,59)/1 crt=49'120 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:24 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 59 pg[10.4( v 49'120 (0'0,49'120] local-lis/les=54/55 n=1 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=59) [2]/[0] r=0 lpr=59 pi=[54,59)/1 crt=49'120 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:24 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 59 pg[10.4( v 49'120 (0'0,49'120] local-lis/les=54/55 n=1 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=59) [2]/[0] r=0 lpr=59 pi=[54,59)/1 crt=49'120 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:24 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 59 pg[10.3( v 57'123 (0'0,57'123] local-lis/les=54/55 n=1 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=59) [2]/[0] r=0 lpr=59 pi=[54,59)/1 crt=49'120 lcod 57'122 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:24 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 59 pg[10.3( v 57'123 (0'0,57'123] local-lis/les=54/55 n=1 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=59) [2]/[0] r=0 lpr=59 pi=[54,59)/1 crt=49'120 lcod 57'122 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:24 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 59 pg[10.5( v 49'120 (0'0,49'120] local-lis/les=54/55 n=1 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=59) [1]/[0] r=0 lpr=59 pi=[54,59)/1 crt=49'120 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:24 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 59 pg[10.5( v 49'120 (0'0,49'120] local-lis/les=54/55 n=1 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=59) [1]/[0] r=0 lpr=59 pi=[54,59)/1 crt=49'120 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:24 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 59 pg[10.18( v 49'120 (0'0,49'120] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=59) [1]/[0] r=0 lpr=59 pi=[54,59)/1 crt=49'120 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:24 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 59 pg[10.18( v 49'120 (0'0,49'120] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=59) [1]/[0] r=0 lpr=59 pi=[54,59)/1 crt=49'120 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:24 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 59 pg[10.19( v 49'120 (0'0,49'120] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=59) [1]/[0] r=0 lpr=59 pi=[54,59)/1 crt=49'120 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:24 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 59 pg[10.19( v 49'120 (0'0,49'120] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=59) [1]/[0] r=0 lpr=59 pi=[54,59)/1 crt=49'120 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:24 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 59 pg[10.1e( v 49'120 (0'0,49'120] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=59) [2]/[0] r=0 lpr=59 pi=[54,59)/1 crt=49'120 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:24 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 59 pg[10.10( v 49'120 (0'0,49'120] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=59) [2]/[0] r=0 lpr=59 pi=[54,59)/1 crt=49'120 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:24 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 59 pg[10.1e( v 49'120 (0'0,49'120] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=59) [2]/[0] r=0 lpr=59 pi=[54,59)/1 crt=49'120 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:24 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 59 pg[10.12( v 49'120 (0'0,49'120] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=59) [2]/[0] r=0 lpr=59 pi=[54,59)/1 crt=49'120 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:24 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 59 pg[10.12( v 49'120 (0'0,49'120] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=59) [2]/[0] r=0 lpr=59 pi=[54,59)/1 crt=49'120 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:24 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 59 pg[10.10( v 49'120 (0'0,49'120] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=59) [2]/[0] r=0 lpr=59 pi=[54,59)/1 crt=49'120 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:24 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 59 pg[10.11( v 49'120 (0'0,49'120] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=59) [2]/[0] r=0 lpr=59 pi=[54,59)/1 crt=49'120 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:24 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 59 pg[10.11( v 49'120 (0'0,49'120] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=59) [2]/[0] r=0 lpr=59 pi=[54,59)/1 crt=49'120 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:24 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 59 pg[8.17( v 35'12 (0'0,35'12] local-lis/les=58/59 n=0 ec=52/34 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=35'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:24 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 59 pg[10.1b( v 49'120 (0'0,49'120] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=59) [1]/[0] r=0 lpr=59 pi=[54,59)/1 crt=49'120 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:24 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 59 pg[10.1b( v 49'120 (0'0,49'120] local-lis/les=54/55 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=59) [1]/[0] r=0 lpr=59 pi=[54,59)/1 crt=49'120 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:24 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 59 pg[8.14( v 35'12 (0'0,35'12] local-lis/les=58/59 n=0 ec=52/34 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=35'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:24 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 59 pg[11.12( v 49'2 (0'0,49'2] local-lis/les=58/59 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:24 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 59 pg[11.14( v 49'2 (0'0,49'2] local-lis/les=58/59 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:24 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 59 pg[11.1( v 49'2 (0'0,49'2] local-lis/les=58/59 n=1 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:24 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 59 pg[11.f( v 49'2 (0'0,49'2] local-lis/les=58/59 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:24 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 59 pg[8.8( v 35'12 (0'0,35'12] local-lis/les=58/59 n=0 ec=52/34 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=35'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:24 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 59 pg[11.5( v 49'2 (0'0,49'2] local-lis/les=58/59 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:24 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 59 pg[8.4( v 35'12 (0'0,35'12] local-lis/les=58/59 n=1 ec=52/34 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=35'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:24 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 59 pg[11.7( v 49'2 (0'0,49'2] local-lis/les=58/59 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:24 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 59 pg[8.1b( v 35'12 (0'0,35'12] local-lis/les=58/59 n=0 ec=52/34 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=35'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:24 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 59 pg[11.4( v 49'2 lc 0'0 (0'0,49'2] local-lis/les=58/59 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=49'2 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:24 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 59 pg[8.18( v 35'12 (0'0,35'12] local-lis/les=58/59 n=0 ec=52/34 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=35'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:24 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 59 pg[11.1d( v 49'2 (0'0,49'2] local-lis/les=58/59 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:24 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 59 pg[11.1c( v 49'2 (0'0,49'2] local-lis/les=58/59 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:24 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 59 pg[11.1b( v 49'2 (0'0,49'2] local-lis/les=58/59 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:24 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 59 pg[8.12( v 35'12 (0'0,35'12] local-lis/les=58/59 n=0 ec=52/34 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=35'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:24 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 59 pg[8.10( v 35'12 (0'0,35'12] local-lis/les=58/59 n=0 ec=52/34 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=35'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:24 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 59 pg[11.1e( v 49'2 (0'0,49'2] local-lis/les=58/59 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:24 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 59 pg[8.19( v 35'12 lc 0'0 (0'0,35'12] local-lis/les=58/59 n=0 ec=52/34 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=35'12 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:24 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 59 pg[11.1a( v 49'2 (0'0,49'2] local-lis/les=58/59 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:24 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e59 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 08 09:49:25 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:25 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e640036e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:25 compute-1 ceph-mon[79846]: 9.15 scrub starts
Dec 08 09:49:25 compute-1 ceph-mon[79846]: 9.15 scrub ok
Dec 08 09:49:25 compute-1 ceph-mon[79846]: pgmap v53: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec 08 09:49:25 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Dec 08 09:49:25 compute-1 ceph-mon[79846]: 10.16 scrub starts
Dec 08 09:49:25 compute-1 ceph-mon[79846]: 10.16 scrub ok
Dec 08 09:49:25 compute-1 ceph-mon[79846]: osdmap e59: 3 total, 3 up, 3 in
Dec 08 09:49:25 compute-1 ceph-mon[79846]: 9.14 scrub starts
Dec 08 09:49:25 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:49:25 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:49:25 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:49:25 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:49:25 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:49:25 compute-1 ceph-mon[79846]: Deploying daemon haproxy.rgw.default.compute-0.dmkdub on compute-0
Dec 08 09:49:25 compute-1 ceph-mon[79846]: 9.14 scrub ok
Dec 08 09:49:25 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 12.15 scrub starts
Dec 08 09:49:25 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 12.15 scrub ok
Dec 08 09:49:25 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e60 e60: 3 total, 3 up, 3 in
Dec 08 09:49:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 60 pg[10.1b( v 49'120 (0'0,49'120] local-lis/les=59/60 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=59) [1]/[0] async=[1] r=0 lpr=59 pi=[54,59)/1 crt=49'120 lcod 0'0 mlcod 0'0 active+remapped mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 60 pg[10.18( v 49'120 (0'0,49'120] local-lis/les=59/60 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=59) [1]/[0] async=[1] r=0 lpr=59 pi=[54,59)/1 crt=49'120 lcod 0'0 mlcod 0'0 active+remapped mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 60 pg[10.8( v 49'120 (0'0,49'120] local-lis/les=59/60 n=1 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=59) [1]/[0] async=[1] r=0 lpr=59 pi=[54,59)/1 crt=49'120 lcod 0'0 mlcod 0'0 active+remapped mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 60 pg[10.4( v 49'120 (0'0,49'120] local-lis/les=59/60 n=1 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=59) [2]/[0] async=[2] r=0 lpr=59 pi=[54,59)/1 crt=49'120 lcod 0'0 mlcod 0'0 active+remapped mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 60 pg[10.2( v 49'120 (0'0,49'120] local-lis/les=59/60 n=1 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=59) [1]/[0] async=[1] r=0 lpr=59 pi=[54,59)/1 crt=49'120 lcod 0'0 mlcod 0'0 active+remapped mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 60 pg[10.15( v 57'123 (0'0,57'123] local-lis/les=59/60 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=59) [1]/[0] async=[1] r=0 lpr=59 pi=[54,59)/1 crt=57'123 lcod 57'122 mlcod 0'0 active+remapped mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 60 pg[10.5( v 49'120 (0'0,49'120] local-lis/les=59/60 n=1 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=59) [1]/[0] async=[1] r=0 lpr=59 pi=[54,59)/1 crt=49'120 lcod 0'0 mlcod 0'0 active+remapped mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 60 pg[10.19( v 49'120 (0'0,49'120] local-lis/les=59/60 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=59) [1]/[0] async=[1] r=0 lpr=59 pi=[54,59)/1 crt=49'120 lcod 0'0 mlcod 0'0 active+remapped mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 60 pg[10.14( v 57'123 (0'0,57'123] local-lis/les=59/60 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=59) [1]/[0] async=[1] r=0 lpr=59 pi=[54,59)/1 crt=57'123 lcod 57'122 mlcod 0'0 active+remapped mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 60 pg[10.13( v 49'120 (0'0,49'120] local-lis/les=59/60 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=59) [1]/[0] async=[1] r=0 lpr=59 pi=[54,59)/1 crt=49'120 lcod 0'0 mlcod 0'0 active+remapped mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 60 pg[10.1( v 49'120 (0'0,49'120] local-lis/les=59/60 n=1 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=59) [2]/[0] async=[2] r=0 lpr=59 pi=[54,59)/1 crt=49'120 lcod 0'0 mlcod 0'0 active+remapped mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 60 pg[10.10( v 49'120 (0'0,49'120] local-lis/les=59/60 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=59) [2]/[0] async=[2] r=0 lpr=59 pi=[54,59)/1 crt=49'120 lcod 0'0 mlcod 0'0 active+remapped mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 60 pg[10.11( v 49'120 (0'0,49'120] local-lis/les=59/60 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=59) [2]/[0] async=[2] r=0 lpr=59 pi=[54,59)/1 crt=49'120 lcod 0'0 mlcod 0'0 active+remapped mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 60 pg[10.12( v 49'120 (0'0,49'120] local-lis/les=59/60 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=59) [2]/[0] async=[2] r=0 lpr=59 pi=[54,59)/1 crt=49'120 lcod 0'0 mlcod 0'0 active+remapped mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 60 pg[10.f( v 49'120 (0'0,49'120] local-lis/les=59/60 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=59) [2]/[0] async=[2] r=0 lpr=59 pi=[54,59)/1 crt=49'120 lcod 0'0 mlcod 0'0 active+remapped mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 60 pg[10.1e( v 49'120 (0'0,49'120] local-lis/les=59/60 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=59) [2]/[0] async=[2] r=0 lpr=59 pi=[54,59)/1 crt=49'120 lcod 0'0 mlcod 0'0 active+remapped mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:25 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 60 pg[10.3( v 57'123 (0'0,57'123] local-lis/les=59/60 n=1 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=59) [2]/[0] async=[2] r=0 lpr=59 pi=[54,59)/1 crt=57'123 lcod 57'122 mlcod 0'0 active+remapped mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:25 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:25 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e580016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:25 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:25 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88001340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:26 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 10.0 scrub starts
Dec 08 09:49:26 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 10.0 scrub ok
Dec 08 09:49:26 compute-1 ceph-mon[79846]: 12.15 scrub starts
Dec 08 09:49:26 compute-1 ceph-mon[79846]: 12.15 scrub ok
Dec 08 09:49:26 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Dec 08 09:49:26 compute-1 ceph-mon[79846]: osdmap e60: 3 total, 3 up, 3 in
Dec 08 09:49:26 compute-1 ceph-mon[79846]: 8.6 scrub starts
Dec 08 09:49:26 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:49:26 compute-1 ceph-mon[79846]: 8.6 scrub ok
Dec 08 09:49:26 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Dec 08 09:49:26 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e61 e61: 3 total, 3 up, 3 in
Dec 08 09:49:26 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 61 pg[10.15( v 60'130 (0'0,60'130] local-lis/les=59/60 n=0 ec=54/38 lis/c=59/54 les/c/f=60/55/0 sis=61 pruub=14.942734718s) [1] async=[1] r=-1 lpr=61 pi=[54,61)/1 crt=60'130 lcod 60'129 mlcod 60'129 active pruub 190.582412720s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:26 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 61 pg[10.15( v 60'130 (0'0,60'130] local-lis/les=59/60 n=0 ec=54/38 lis/c=59/54 les/c/f=60/55/0 sis=61 pruub=14.942668915s) [1] r=-1 lpr=61 pi=[54,61)/1 crt=60'130 lcod 60'129 mlcod 0'0 unknown NOTIFY pruub 190.582412720s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:49:26 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 61 pg[10.13( v 49'120 (0'0,49'120] local-lis/les=59/60 n=0 ec=54/38 lis/c=59/54 les/c/f=60/55/0 sis=61 pruub=14.942261696s) [1] async=[1] r=-1 lpr=61 pi=[54,61)/1 crt=49'120 lcod 0'0 mlcod 0'0 active pruub 190.582473755s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:26 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 61 pg[10.1( v 49'120 (0'0,49'120] local-lis/les=59/60 n=1 ec=54/38 lis/c=59/54 les/c/f=60/55/0 sis=61 pruub=14.944141388s) [2] async=[2] r=-1 lpr=61 pi=[54,61)/1 crt=49'120 lcod 0'0 mlcod 0'0 active pruub 190.584365845s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:26 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 61 pg[10.1( v 49'120 (0'0,49'120] local-lis/les=59/60 n=1 ec=54/38 lis/c=59/54 les/c/f=60/55/0 sis=61 pruub=14.944124222s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=49'120 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.584365845s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:49:26 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 61 pg[10.13( v 49'120 (0'0,49'120] local-lis/les=59/60 n=0 ec=54/38 lis/c=59/54 les/c/f=60/55/0 sis=61 pruub=14.942220688s) [1] r=-1 lpr=61 pi=[54,61)/1 crt=49'120 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.582473755s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:49:26 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 61 pg[10.2( v 49'120 (0'0,49'120] local-lis/les=59/60 n=1 ec=54/38 lis/c=59/54 les/c/f=60/55/0 sis=61 pruub=14.942071915s) [1] async=[1] r=-1 lpr=61 pi=[54,61)/1 crt=49'120 lcod 0'0 mlcod 0'0 active pruub 190.582183838s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:26 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 61 pg[10.2( v 49'120 (0'0,49'120] local-lis/les=59/60 n=1 ec=54/38 lis/c=59/54 les/c/f=60/55/0 sis=61 pruub=14.941664696s) [1] r=-1 lpr=61 pi=[54,61)/1 crt=49'120 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.582183838s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:49:26 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 61 pg[10.8( v 49'120 (0'0,49'120] local-lis/les=59/60 n=1 ec=54/38 lis/c=59/54 les/c/f=60/55/0 sis=61 pruub=14.938225746s) [1] async=[1] r=-1 lpr=61 pi=[54,61)/1 crt=49'120 lcod 0'0 mlcod 0'0 active pruub 190.579223633s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:26 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 61 pg[10.8( v 49'120 (0'0,49'120] local-lis/les=59/60 n=1 ec=54/38 lis/c=59/54 les/c/f=60/55/0 sis=61 pruub=14.938202858s) [1] r=-1 lpr=61 pi=[54,61)/1 crt=49'120 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.579223633s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:49:26 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 61 pg[10.4( v 49'120 (0'0,49'120] local-lis/les=59/60 n=1 ec=54/38 lis/c=59/54 les/c/f=60/55/0 sis=61 pruub=14.940489769s) [2] async=[2] r=-1 lpr=61 pi=[54,61)/1 crt=49'120 lcod 0'0 mlcod 0'0 active pruub 190.581787109s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:26 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 61 pg[10.4( v 49'120 (0'0,49'120] local-lis/les=59/60 n=1 ec=54/38 lis/c=59/54 les/c/f=60/55/0 sis=61 pruub=14.940470695s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=49'120 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.581787109s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:49:26 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 61 pg[10.5( v 49'120 (0'0,49'120] local-lis/les=59/60 n=1 ec=54/38 lis/c=59/54 les/c/f=60/55/0 sis=61 pruub=14.940964699s) [1] async=[1] r=-1 lpr=61 pi=[54,61)/1 crt=49'120 lcod 0'0 mlcod 0'0 active pruub 190.582427979s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:26 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 61 pg[10.5( v 49'120 (0'0,49'120] local-lis/les=59/60 n=1 ec=54/38 lis/c=59/54 les/c/f=60/55/0 sis=61 pruub=14.940940857s) [1] r=-1 lpr=61 pi=[54,61)/1 crt=49'120 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.582427979s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:49:26 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 61 pg[10.f( v 49'120 (0'0,49'120] local-lis/les=59/60 n=0 ec=54/38 lis/c=59/54 les/c/f=60/55/0 sis=61 pruub=14.943136215s) [2] async=[2] r=-1 lpr=61 pi=[54,61)/1 crt=49'120 lcod 0'0 mlcod 0'0 active pruub 190.584533691s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:26 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 61 pg[10.3( v 60'130 (0'0,60'130] local-lis/les=59/60 n=1 ec=54/38 lis/c=59/54 les/c/f=60/55/0 sis=61 pruub=14.942975998s) [2] async=[2] r=-1 lpr=61 pi=[54,61)/1 crt=60'130 lcod 60'129 mlcod 60'129 active pruub 190.584640503s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:26 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 61 pg[10.18( v 49'120 (0'0,49'120] local-lis/les=59/60 n=0 ec=54/38 lis/c=59/54 les/c/f=60/55/0 sis=61 pruub=14.937445641s) [1] async=[1] r=-1 lpr=61 pi=[54,61)/1 crt=49'120 lcod 0'0 mlcod 0'0 active pruub 190.579162598s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:26 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 61 pg[10.18( v 49'120 (0'0,49'120] local-lis/les=59/60 n=0 ec=54/38 lis/c=59/54 les/c/f=60/55/0 sis=61 pruub=14.937421799s) [1] r=-1 lpr=61 pi=[54,61)/1 crt=49'120 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.579162598s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:49:26 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 61 pg[10.3( v 60'130 (0'0,60'130] local-lis/les=59/60 n=1 ec=54/38 lis/c=59/54 les/c/f=60/55/0 sis=61 pruub=14.942886353s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=60'130 lcod 60'129 mlcod 0'0 unknown NOTIFY pruub 190.584640503s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:49:26 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 61 pg[10.f( v 49'120 (0'0,49'120] local-lis/les=59/60 n=0 ec=54/38 lis/c=59/54 les/c/f=60/55/0 sis=61 pruub=14.942621231s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=49'120 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.584533691s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:49:26 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 61 pg[10.14( v 60'130 (0'0,60'130] local-lis/les=59/60 n=0 ec=54/38 lis/c=59/54 les/c/f=60/55/0 sis=61 pruub=14.940387726s) [1] async=[1] r=-1 lpr=61 pi=[54,61)/1 crt=60'130 lcod 60'129 mlcod 60'129 active pruub 190.582504272s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:26 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 61 pg[10.19( v 49'120 (0'0,49'120] local-lis/les=59/60 n=0 ec=54/38 lis/c=59/54 les/c/f=60/55/0 sis=61 pruub=14.940331459s) [1] async=[1] r=-1 lpr=61 pi=[54,61)/1 crt=49'120 lcod 0'0 mlcod 0'0 active pruub 190.582489014s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:26 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 61 pg[10.14( v 60'130 (0'0,60'130] local-lis/les=59/60 n=0 ec=54/38 lis/c=59/54 les/c/f=60/55/0 sis=61 pruub=14.940316200s) [1] r=-1 lpr=61 pi=[54,61)/1 crt=60'130 lcod 60'129 mlcod 0'0 unknown NOTIFY pruub 190.582504272s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:49:26 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 61 pg[10.19( v 49'120 (0'0,49'120] local-lis/les=59/60 n=0 ec=54/38 lis/c=59/54 les/c/f=60/55/0 sis=61 pruub=14.940079689s) [1] r=-1 lpr=61 pi=[54,61)/1 crt=49'120 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.582489014s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:49:26 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 61 pg[10.1e( v 49'120 (0'0,49'120] local-lis/les=59/60 n=0 ec=54/38 lis/c=59/54 les/c/f=60/55/0 sis=61 pruub=14.942037582s) [2] async=[2] r=-1 lpr=61 pi=[54,61)/1 crt=49'120 lcod 0'0 mlcod 0'0 active pruub 190.584548950s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:26 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 61 pg[10.1e( v 49'120 (0'0,49'120] local-lis/les=59/60 n=0 ec=54/38 lis/c=59/54 les/c/f=60/55/0 sis=61 pruub=14.942008018s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=49'120 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.584548950s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:49:26 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 61 pg[10.11( v 49'120 (0'0,49'120] local-lis/les=59/60 n=0 ec=54/38 lis/c=59/54 les/c/f=60/55/0 sis=61 pruub=14.941708565s) [2] async=[2] r=-1 lpr=61 pi=[54,61)/1 crt=49'120 lcod 0'0 mlcod 0'0 active pruub 190.584457397s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:26 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 61 pg[10.11( v 49'120 (0'0,49'120] local-lis/les=59/60 n=0 ec=54/38 lis/c=59/54 les/c/f=60/55/0 sis=61 pruub=14.941686630s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=49'120 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.584457397s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:49:26 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 61 pg[10.12( v 49'120 (0'0,49'120] local-lis/les=59/60 n=0 ec=54/38 lis/c=59/54 les/c/f=60/55/0 sis=61 pruub=14.941639900s) [2] async=[2] r=-1 lpr=61 pi=[54,61)/1 crt=49'120 lcod 0'0 mlcod 0'0 active pruub 190.584503174s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:26 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 61 pg[10.1b( v 49'120 (0'0,49'120] local-lis/les=59/60 n=0 ec=54/38 lis/c=59/54 les/c/f=60/55/0 sis=61 pruub=14.935973167s) [1] async=[1] r=-1 lpr=61 pi=[54,61)/1 crt=49'120 lcod 0'0 mlcod 0'0 active pruub 190.578948975s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:26 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 61 pg[10.12( v 49'120 (0'0,49'120] local-lis/les=59/60 n=0 ec=54/38 lis/c=59/54 les/c/f=60/55/0 sis=61 pruub=14.941600800s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=49'120 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.584503174s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:49:26 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 61 pg[10.1b( v 49'120 (0'0,49'120] local-lis/les=59/60 n=0 ec=54/38 lis/c=59/54 les/c/f=60/55/0 sis=61 pruub=14.935933113s) [1] r=-1 lpr=61 pi=[54,61)/1 crt=49'120 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.578948975s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:49:26 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 61 pg[10.10( v 49'120 (0'0,49'120] local-lis/les=59/60 n=0 ec=54/38 lis/c=59/54 les/c/f=60/55/0 sis=61 pruub=14.941329002s) [2] async=[2] r=-1 lpr=61 pi=[54,61)/1 crt=49'120 lcod 0'0 mlcod 0'0 active pruub 190.584411621s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:26 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 61 pg[10.10( v 49'120 (0'0,49'120] local-lis/les=59/60 n=0 ec=54/38 lis/c=59/54 les/c/f=60/55/0 sis=61 pruub=14.941165924s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=49'120 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.584411621s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:49:27 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:27 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e60003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:27 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 10.e scrub starts
Dec 08 09:49:27 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 10.e scrub ok
Dec 08 09:49:27 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e62 e62: 3 total, 3 up, 3 in
Dec 08 09:49:27 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:27 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e640036e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:27 compute-1 ceph-mon[79846]: pgmap v56: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec 08 09:49:27 compute-1 ceph-mon[79846]: 10.0 scrub starts
Dec 08 09:49:27 compute-1 ceph-mon[79846]: 10.0 scrub ok
Dec 08 09:49:27 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Dec 08 09:49:27 compute-1 ceph-mon[79846]: osdmap e61: 3 total, 3 up, 3 in
Dec 08 09:49:27 compute-1 ceph-mon[79846]: 8.f scrub starts
Dec 08 09:49:27 compute-1 ceph-mon[79846]: 8.f scrub ok
Dec 08 09:49:27 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:49:27 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:49:27 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:49:27 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:27 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e580016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:28 compute-1 radosgw[81249]: ====== starting new request req=0x7faefea325d0 =====
Dec 08 09:49:28 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 10.c scrub starts
Dec 08 09:49:28 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 10.c scrub ok
Dec 08 09:49:28 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e63 e63: 3 total, 3 up, 3 in
Dec 08 09:49:28 compute-1 radosgw[81249]: ====== req done req=0x7faefea325d0 op status=0 http_status=200 latency=0.847025096s ======
Dec 08 09:49:28 compute-1 radosgw[81249]: beast: 0x7faefea325d0: 192.168.122.100 - anonymous [08/Dec/2025:09:49:28.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.847025096s
Dec 08 09:49:28 compute-1 ceph-mon[79846]: Deploying daemon haproxy.rgw.default.compute-2.akikwx on compute-2
Dec 08 09:49:28 compute-1 ceph-mon[79846]: 10.e scrub starts
Dec 08 09:49:28 compute-1 ceph-mon[79846]: 10.e scrub ok
Dec 08 09:49:28 compute-1 ceph-mon[79846]: 9.10 scrub starts
Dec 08 09:49:28 compute-1 ceph-mon[79846]: 9.10 scrub ok
Dec 08 09:49:28 compute-1 ceph-mon[79846]: osdmap e62: 3 total, 3 up, 3 in
Dec 08 09:49:28 compute-1 ceph-mon[79846]: 8.11 scrub starts
Dec 08 09:49:28 compute-1 ceph-mon[79846]: 8.11 scrub ok
Dec 08 09:49:28 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Dec 08 09:49:28 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Dec 08 09:49:28 compute-1 ceph-mon[79846]: osdmap e63: 3 total, 3 up, 3 in
Dec 08 09:49:29 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:29 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88001340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:29 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 10.a scrub starts
Dec 08 09:49:29 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 10.a scrub ok
Dec 08 09:49:29 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:29 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e60003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:29 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 08 09:49:29 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e64 e64: 3 total, 3 up, 3 in
Dec 08 09:49:29 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:29 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e640036e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:29 compute-1 ceph-mon[79846]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #13. Immutable memtables: 0.
Dec 08 09:49:29 compute-1 ceph-mon[79846]: rocksdb: (Original Log Time 2025/12/08-09:49:29.952428) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 08 09:49:29 compute-1 ceph-mon[79846]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 13
Dec 08 09:49:29 compute-1 ceph-mon[79846]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765187369952575, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7004, "num_deletes": 256, "total_data_size": 19016526, "memory_usage": 20084096, "flush_reason": "Manual Compaction"}
Dec 08 09:49:29 compute-1 ceph-mon[79846]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #14: started
Dec 08 09:49:30 compute-1 ceph-mon[79846]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765187370058728, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 14, "file_size": 12119241, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 6, "largest_seqno": 7009, "table_properties": {"data_size": 12092336, "index_size": 17126, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8709, "raw_key_size": 84650, "raw_average_key_size": 24, "raw_value_size": 12025309, "raw_average_value_size": 3465, "num_data_blocks": 755, "num_entries": 3470, "num_filter_entries": 3470, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765187219, "oldest_key_time": 1765187219, "file_creation_time": 1765187369, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "80a3bbeb-1e70-44e2-b668-bf1fa77bc39c", "db_session_id": "N6WMD309NYTLX53YA9N0", "orig_file_number": 14, "seqno_to_time_mapping": "N/A"}}
Dec 08 09:49:30 compute-1 ceph-mon[79846]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 106344 microseconds, and 28085 cpu microseconds.
Dec 08 09:49:30 compute-1 radosgw[81249]: ====== starting new request req=0x7faefea325d0 =====
Dec 08 09:49:30 compute-1 radosgw[81249]: ====== req done req=0x7faefea325d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 08 09:49:30 compute-1 radosgw[81249]: beast: 0x7faefea325d0: 192.168.122.102 - anonymous [08/Dec/2025:09:49:30.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 08 09:49:30 compute-1 ceph-mon[79846]: rocksdb: (Original Log Time 2025/12/08-09:49:30.058786) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #14: 12119241 bytes OK
Dec 08 09:49:30 compute-1 ceph-mon[79846]: rocksdb: (Original Log Time 2025/12/08-09:49:30.058807) [db/memtable_list.cc:519] [default] Level-0 commit table #14 started
Dec 08 09:49:30 compute-1 ceph-mon[79846]: rocksdb: (Original Log Time 2025/12/08-09:49:30.136491) [db/memtable_list.cc:722] [default] Level-0 commit table #14: memtable #1 done
Dec 08 09:49:30 compute-1 ceph-mon[79846]: rocksdb: (Original Log Time 2025/12/08-09:49:30.136574) EVENT_LOG_v1 {"time_micros": 1765187370136524, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [2, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Dec 08 09:49:30 compute-1 ceph-mon[79846]: rocksdb: (Original Log Time 2025/12/08-09:49:30.136594) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[2 0 0 0 0 0 0] max score 0.50
Dec 08 09:49:30 compute-1 ceph-mon[79846]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 18979062, prev total WAL file size 18981632, number of live WAL files 2.
Dec 08 09:49:30 compute-1 ceph-mon[79846]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 08 09:49:30 compute-1 ceph-mon[79846]: rocksdb: (Original Log Time 2025/12/08-09:49:30.140498) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323532' seq:0, type:0; will stop at (end)
Dec 08 09:49:30 compute-1 ceph-mon[79846]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 2@0 files to L6, score -1.00
Dec 08 09:49:30 compute-1 ceph-mon[79846]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [14(11MB) 8(1648B)]
Dec 08 09:49:30 compute-1 ceph-mon[79846]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765187370140615, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [14, 8], "score": -1, "input_data_size": 12120889, "oldest_snapshot_seqno": -1}
Dec 08 09:49:30 compute-1 ceph-mon[79846]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #15: 3218 keys, 12115797 bytes, temperature: kUnknown
Dec 08 09:49:30 compute-1 ceph-mon[79846]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765187370351408, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 15, "file_size": 12115797, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12089503, "index_size": 17160, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8069, "raw_key_size": 81159, "raw_average_key_size": 25, "raw_value_size": 12025589, "raw_average_value_size": 3736, "num_data_blocks": 755, "num_entries": 3218, "num_filter_entries": 3218, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765187219, "oldest_key_time": 0, "file_creation_time": 1765187370, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "80a3bbeb-1e70-44e2-b668-bf1fa77bc39c", "db_session_id": "N6WMD309NYTLX53YA9N0", "orig_file_number": 15, "seqno_to_time_mapping": "N/A"}}
Dec 08 09:49:30 compute-1 ceph-mon[79846]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 08 09:49:30 compute-1 ceph-mon[79846]: pgmap v59: 353 pgs: 17 active+remapped, 336 active+clean; 456 KiB data, 107 MiB used, 60 GiB / 60 GiB avail; 494 B/s, 0 keys/s, 3 objects/s recovering
Dec 08 09:49:30 compute-1 ceph-mon[79846]: 10.c scrub starts
Dec 08 09:49:30 compute-1 ceph-mon[79846]: 10.c scrub ok
Dec 08 09:49:30 compute-1 ceph-mon[79846]: 9.2 scrub starts
Dec 08 09:49:30 compute-1 ceph-mon[79846]: 9.2 scrub ok
Dec 08 09:49:30 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:49:30 compute-1 ceph-mon[79846]: rocksdb: (Original Log Time 2025/12/08-09:49:30.351716) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 2@0 files to L6 => 12115797 bytes
Dec 08 09:49:30 compute-1 ceph-mon[79846]: rocksdb: (Original Log Time 2025/12/08-09:49:30.353750) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 57.5 rd, 57.4 wr, level 6, files in(2, 0) out(1 +0 blob) MB in(11.6, 0.0 +0.0 blob) out(11.6 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3475, records dropped: 257 output_compression: NoCompression
Dec 08 09:49:30 compute-1 ceph-mon[79846]: rocksdb: (Original Log Time 2025/12/08-09:49:30.353779) EVENT_LOG_v1 {"time_micros": 1765187370353765, "job": 4, "event": "compaction_finished", "compaction_time_micros": 210915, "compaction_time_cpu_micros": 25527, "output_level": 6, "num_output_files": 1, "total_output_size": 12115797, "num_input_records": 3475, "num_output_records": 3218, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 08 09:49:30 compute-1 ceph-mon[79846]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000014.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 08 09:49:30 compute-1 ceph-mon[79846]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765187370357243, "job": 4, "event": "table_file_deletion", "file_number": 14}
Dec 08 09:49:30 compute-1 ceph-mon[79846]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 08 09:49:30 compute-1 ceph-mon[79846]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765187370357317, "job": 4, "event": "table_file_deletion", "file_number": 8}
Dec 08 09:49:30 compute-1 ceph-mon[79846]: rocksdb: (Original Log Time 2025/12/08-09:49:30.140382) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 08 09:49:30 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Dec 08 09:49:30 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Dec 08 09:49:30 compute-1 radosgw[81249]: ====== starting new request req=0x7faefea325d0 =====
Dec 08 09:49:30 compute-1 radosgw[81249]: ====== req done req=0x7faefea325d0 op status=0 http_status=200 latency=0.001000029s ======
Dec 08 09:49:30 compute-1 radosgw[81249]: beast: 0x7faefea325d0: 192.168.122.100 - anonymous [08/Dec/2025:09:49:30.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 08 09:49:30 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e65 e65: 3 total, 3 up, 3 in
Dec 08 09:49:31 compute-1 ceph-mon[79846]: 10.a scrub starts
Dec 08 09:49:31 compute-1 ceph-mon[79846]: 10.a scrub ok
Dec 08 09:49:31 compute-1 ceph-mon[79846]: 10.18 scrub starts
Dec 08 09:49:31 compute-1 ceph-mon[79846]: 10.18 scrub ok
Dec 08 09:49:31 compute-1 ceph-mon[79846]: osdmap e64: 3 total, 3 up, 3 in
Dec 08 09:49:31 compute-1 ceph-mon[79846]: 12.17 scrub starts
Dec 08 09:49:31 compute-1 ceph-mon[79846]: 12.17 scrub ok
Dec 08 09:49:31 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:49:31 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:49:31 compute-1 ceph-mon[79846]: 10.9 scrub starts
Dec 08 09:49:31 compute-1 ceph-mon[79846]: 10.9 scrub ok
Dec 08 09:49:31 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Dec 08 09:49:31 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:49:31 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Dec 08 09:49:31 compute-1 ceph-mon[79846]: osdmap e65: 3 total, 3 up, 3 in
Dec 08 09:49:31 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:49:31 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:31 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e640036e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:31 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 12.f scrub starts
Dec 08 09:49:31 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 12.f scrub ok
Dec 08 09:49:31 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:31 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e640036e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:31 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:31 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e60003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:32 compute-1 radosgw[81249]: ====== starting new request req=0x7faefea325d0 =====
Dec 08 09:49:32 compute-1 radosgw[81249]: ====== req done req=0x7faefea325d0 op status=0 http_status=200 latency=0.001000030s ======
Dec 08 09:49:32 compute-1 radosgw[81249]: beast: 0x7faefea325d0: 192.168.122.102 - anonymous [08/Dec/2025:09:49:32.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec 08 09:49:32 compute-1 ceph-mon[79846]: pgmap v62: 353 pgs: 17 active+remapped, 336 active+clean; 456 KiB data, 107 MiB used, 60 GiB / 60 GiB avail; 494 B/s, 0 keys/s, 3 objects/s recovering
Dec 08 09:49:32 compute-1 ceph-mon[79846]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 08 09:49:32 compute-1 ceph-mon[79846]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 08 09:49:32 compute-1 ceph-mon[79846]: Deploying daemon keepalived.rgw.default.compute-0.qvwaqs on compute-0
Dec 08 09:49:32 compute-1 ceph-mon[79846]: 10.5 scrub starts
Dec 08 09:49:32 compute-1 ceph-mon[79846]: 10.5 scrub ok
Dec 08 09:49:32 compute-1 ceph-mon[79846]: 12.11 deep-scrub starts
Dec 08 09:49:32 compute-1 ceph-mon[79846]: 12.11 deep-scrub ok
Dec 08 09:49:32 compute-1 ceph-mon[79846]: 12.f scrub starts
Dec 08 09:49:32 compute-1 ceph-mon[79846]: 12.f scrub ok
Dec 08 09:49:32 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e66 e66: 3 total, 3 up, 3 in
Dec 08 09:49:32 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 10.d scrub starts
Dec 08 09:49:32 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 10.d scrub ok
Dec 08 09:49:32 compute-1 radosgw[81249]: ====== starting new request req=0x7faefea325d0 =====
Dec 08 09:49:32 compute-1 radosgw[81249]: ====== req done req=0x7faefea325d0 op status=0 http_status=200 latency=0.001000029s ======
Dec 08 09:49:32 compute-1 radosgw[81249]: beast: 0x7faefea325d0: 192.168.122.100 - anonymous [08/Dec/2025:09:49:32.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 08 09:49:33 compute-1 ceph-mon[79846]: 11.15 scrub starts
Dec 08 09:49:33 compute-1 ceph-mon[79846]: 11.15 scrub ok
Dec 08 09:49:33 compute-1 ceph-mon[79846]: 12.9 scrub starts
Dec 08 09:49:33 compute-1 ceph-mon[79846]: 12.9 scrub ok
Dec 08 09:49:33 compute-1 ceph-mon[79846]: osdmap e66: 3 total, 3 up, 3 in
Dec 08 09:49:33 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:49:33 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:49:33 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Dec 08 09:49:33 compute-1 ceph-mon[79846]: 10.d scrub starts
Dec 08 09:49:33 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:49:33 compute-1 ceph-mon[79846]: 10.d scrub ok
Dec 08 09:49:33 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e67 e67: 3 total, 3 up, 3 in
Dec 08 09:49:33 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:33 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e58002720 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:33 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 67 pg[9.1e( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=67) [0] r=0 lpr=67 pi=[52,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:33 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 67 pg[9.6( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=67) [0] r=0 lpr=67 pi=[52,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:33 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 67 pg[9.e( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=67) [0] r=0 lpr=67 pi=[52,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:33 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 67 pg[9.16( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=67) [0] r=0 lpr=67 pi=[52,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:33 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 10.b scrub starts
Dec 08 09:49:33 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 10.b scrub ok
Dec 08 09:49:33 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:33 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88008dc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:33 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:33 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e640036e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:34 compute-1 radosgw[81249]: ====== starting new request req=0x7faefea325d0 =====
Dec 08 09:49:34 compute-1 radosgw[81249]: ====== req done req=0x7faefea325d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 08 09:49:34 compute-1 radosgw[81249]: beast: 0x7faefea325d0: 192.168.122.102 - anonymous [08/Dec/2025:09:49:34.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 08 09:49:34 compute-1 ceph-mon[79846]: pgmap v65: 353 pgs: 353 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 170 B/s, 8 objects/s recovering
Dec 08 09:49:34 compute-1 ceph-mon[79846]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 08 09:49:34 compute-1 ceph-mon[79846]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 08 09:49:34 compute-1 ceph-mon[79846]: Deploying daemon keepalived.rgw.default.compute-2.wajgbn on compute-2
Dec 08 09:49:34 compute-1 ceph-mon[79846]: 11.0 scrub starts
Dec 08 09:49:34 compute-1 ceph-mon[79846]: 11.0 scrub ok
Dec 08 09:49:34 compute-1 ceph-mon[79846]: 12.4 scrub starts
Dec 08 09:49:34 compute-1 ceph-mon[79846]: 12.4 scrub ok
Dec 08 09:49:34 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Dec 08 09:49:34 compute-1 ceph-mon[79846]: osdmap e67: 3 total, 3 up, 3 in
Dec 08 09:49:34 compute-1 ceph-mon[79846]: 10.b scrub starts
Dec 08 09:49:34 compute-1 ceph-mon[79846]: 10.b scrub ok
Dec 08 09:49:34 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e68 e68: 3 total, 3 up, 3 in
Dec 08 09:49:34 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 68 pg[9.6( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=68) [0]/[1] r=-1 lpr=68 pi=[52,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:34 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 68 pg[9.6( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=68) [0]/[1] r=-1 lpr=68 pi=[52,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 08 09:49:34 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 68 pg[9.16( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=68) [0]/[1] r=-1 lpr=68 pi=[52,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:34 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 68 pg[9.16( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=68) [0]/[1] r=-1 lpr=68 pi=[52,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 08 09:49:34 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 68 pg[9.e( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=68) [0]/[1] r=-1 lpr=68 pi=[52,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:34 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 68 pg[9.1e( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=68) [0]/[1] r=-1 lpr=68 pi=[52,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:34 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 68 pg[9.e( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=68) [0]/[1] r=-1 lpr=68 pi=[52,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 08 09:49:34 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 68 pg[9.1e( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=68) [0]/[1] r=-1 lpr=68 pi=[52,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 08 09:49:34 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 12.d scrub starts
Dec 08 09:49:34 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 12.d scrub ok
Dec 08 09:49:34 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 08 09:49:34 compute-1 radosgw[81249]: ====== starting new request req=0x7faefea325d0 =====
Dec 08 09:49:34 compute-1 radosgw[81249]: ====== req done req=0x7faefea325d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 08 09:49:34 compute-1 radosgw[81249]: beast: 0x7faefea325d0: 192.168.122.100 - anonymous [08/Dec/2025:09:49:34.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 08 09:49:35 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:35 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e60003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:35 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 12.5 scrub starts
Dec 08 09:49:35 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 12.5 scrub ok
Dec 08 09:49:35 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:49:35 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:49:35 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:49:35 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:49:35 compute-1 ceph-mon[79846]: osdmap e68: 3 total, 3 up, 3 in
Dec 08 09:49:35 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Dec 08 09:49:35 compute-1 ceph-mon[79846]: 12.d scrub starts
Dec 08 09:49:35 compute-1 ceph-mon[79846]: 12.d scrub ok
Dec 08 09:49:35 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e69 e69: 3 total, 3 up, 3 in
Dec 08 09:49:35 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:35 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e58002720 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:35 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:35 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88008dc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:36 compute-1 radosgw[81249]: ====== starting new request req=0x7faefea325d0 =====
Dec 08 09:49:36 compute-1 radosgw[81249]: ====== req done req=0x7faefea325d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 08 09:49:36 compute-1 radosgw[81249]: beast: 0x7faefea325d0: 192.168.122.102 - anonymous [08/Dec/2025:09:49:36.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 08 09:49:36 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 12.0 scrub starts
Dec 08 09:49:36 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 12.0 scrub ok
Dec 08 09:49:36 compute-1 ceph-mon[79846]: pgmap v68: 353 pgs: 353 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 170 B/s, 8 objects/s recovering
Dec 08 09:49:36 compute-1 ceph-mon[79846]: Deploying daemon prometheus.compute-0 on compute-0
Dec 08 09:49:36 compute-1 ceph-mon[79846]: 8.a scrub starts
Dec 08 09:49:36 compute-1 ceph-mon[79846]: 8.a scrub ok
Dec 08 09:49:36 compute-1 ceph-mon[79846]: 12.5 scrub starts
Dec 08 09:49:36 compute-1 ceph-mon[79846]: 12.5 scrub ok
Dec 08 09:49:36 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Dec 08 09:49:36 compute-1 ceph-mon[79846]: osdmap e69: 3 total, 3 up, 3 in
Dec 08 09:49:36 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:49:36 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Dec 08 09:49:36 compute-1 radosgw[81249]: ====== starting new request req=0x7faefea325d0 =====
Dec 08 09:49:36 compute-1 radosgw[81249]: ====== req done req=0x7faefea325d0 op status=0 http_status=200 latency=0.001000030s ======
Dec 08 09:49:36 compute-1 radosgw[81249]: beast: 0x7faefea325d0: 192.168.122.100 - anonymous [08/Dec/2025:09:49:36.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec 08 09:49:36 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e70 e70: 3 total, 3 up, 3 in
Dec 08 09:49:36 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 70 pg[9.16( v 49'1026 (0'0,49'1026] local-lis/les=0/0 n=4 ec=52/36 lis/c=68/52 les/c/f=69/53/0 sis=70) [0] r=0 lpr=70 pi=[52,70)/1 luod=0'0 crt=49'1026 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:36 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 70 pg[9.16( v 49'1026 (0'0,49'1026] local-lis/les=0/0 n=4 ec=52/36 lis/c=68/52 les/c/f=69/53/0 sis=70) [0] r=0 lpr=70 pi=[52,70)/1 crt=49'1026 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:36 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 70 pg[9.e( v 49'1026 (0'0,49'1026] local-lis/les=0/0 n=6 ec=52/36 lis/c=68/52 les/c/f=69/53/0 sis=70) [0] r=0 lpr=70 pi=[52,70)/1 luod=0'0 crt=49'1026 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:36 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 70 pg[9.e( v 49'1026 (0'0,49'1026] local-lis/les=0/0 n=6 ec=52/36 lis/c=68/52 les/c/f=69/53/0 sis=70) [0] r=0 lpr=70 pi=[52,70)/1 crt=49'1026 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:36 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 70 pg[9.6( v 49'1026 (0'0,49'1026] local-lis/les=0/0 n=6 ec=52/36 lis/c=68/52 les/c/f=69/53/0 sis=70) [0] r=0 lpr=70 pi=[52,70)/1 luod=0'0 crt=49'1026 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:36 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 70 pg[9.6( v 49'1026 (0'0,49'1026] local-lis/les=0/0 n=6 ec=52/36 lis/c=68/52 les/c/f=69/53/0 sis=70) [0] r=0 lpr=70 pi=[52,70)/1 crt=49'1026 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:36 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 70 pg[9.1e( v 49'1026 (0'0,49'1026] local-lis/les=0/0 n=5 ec=52/36 lis/c=68/52 les/c/f=69/53/0 sis=70) [0] r=0 lpr=70 pi=[52,70)/1 luod=0'0 crt=49'1026 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:36 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 70 pg[9.1e( v 49'1026 (0'0,49'1026] local-lis/les=0/0 n=5 ec=52/36 lis/c=68/52 les/c/f=69/53/0 sis=70) [0] r=0 lpr=70 pi=[52,70)/1 crt=49'1026 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:37 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:37 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e640036e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:37 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 10.6 deep-scrub starts
Dec 08 09:49:37 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 10.6 deep-scrub ok
Dec 08 09:49:37 compute-1 ceph-mon[79846]: 8.9 scrub starts
Dec 08 09:49:37 compute-1 ceph-mon[79846]: 8.9 scrub ok
Dec 08 09:49:37 compute-1 ceph-mon[79846]: pgmap v70: 353 pgs: 353 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Dec 08 09:49:37 compute-1 ceph-mon[79846]: 12.0 scrub starts
Dec 08 09:49:37 compute-1 ceph-mon[79846]: 12.0 scrub ok
Dec 08 09:49:37 compute-1 ceph-mon[79846]: 9.9 scrub starts
Dec 08 09:49:37 compute-1 ceph-mon[79846]: 9.9 scrub ok
Dec 08 09:49:37 compute-1 ceph-mon[79846]: 10.f scrub starts
Dec 08 09:49:37 compute-1 ceph-mon[79846]: 10.f scrub ok
Dec 08 09:49:37 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Dec 08 09:49:37 compute-1 ceph-mon[79846]: osdmap e70: 3 total, 3 up, 3 in
Dec 08 09:49:37 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:37 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e60003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:37 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:37 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88009ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:37 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e71 e71: 3 total, 3 up, 3 in
Dec 08 09:49:37 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 71 pg[9.1e( v 49'1026 (0'0,49'1026] local-lis/les=70/71 n=5 ec=52/36 lis/c=68/52 les/c/f=69/53/0 sis=70) [0] r=0 lpr=70 pi=[52,70)/1 crt=49'1026 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:37 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 71 pg[9.e( v 49'1026 (0'0,49'1026] local-lis/les=70/71 n=6 ec=52/36 lis/c=68/52 les/c/f=69/53/0 sis=70) [0] r=0 lpr=70 pi=[52,70)/1 crt=49'1026 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:37 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 71 pg[9.6( v 49'1026 (0'0,49'1026] local-lis/les=70/71 n=6 ec=52/36 lis/c=68/52 les/c/f=69/53/0 sis=70) [0] r=0 lpr=70 pi=[52,70)/1 crt=49'1026 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:37 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 71 pg[9.16( v 49'1026 (0'0,49'1026] local-lis/les=70/71 n=4 ec=52/36 lis/c=68/52 les/c/f=69/53/0 sis=70) [0] r=0 lpr=70 pi=[52,70)/1 crt=49'1026 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:37 compute-1 ceph-mon[79846]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #16. Immutable memtables: 0.
Dec 08 09:49:37 compute-1 ceph-mon[79846]: rocksdb: (Original Log Time 2025/12/08-09:49:37.980057) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 08 09:49:37 compute-1 ceph-mon[79846]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 16
Dec 08 09:49:37 compute-1 ceph-mon[79846]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765187377980092, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 597, "num_deletes": 251, "total_data_size": 780002, "memory_usage": 792712, "flush_reason": "Manual Compaction"}
Dec 08 09:49:37 compute-1 ceph-mon[79846]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #17: started
Dec 08 09:49:37 compute-1 ceph-mon[79846]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765187377985717, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 17, "file_size": 498371, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7014, "largest_seqno": 7606, "table_properties": {"data_size": 495020, "index_size": 1195, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8435, "raw_average_key_size": 19, "raw_value_size": 487951, "raw_average_value_size": 1153, "num_data_blocks": 53, "num_entries": 423, "num_filter_entries": 423, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765187370, "oldest_key_time": 1765187370, "file_creation_time": 1765187377, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "80a3bbeb-1e70-44e2-b668-bf1fa77bc39c", "db_session_id": "N6WMD309NYTLX53YA9N0", "orig_file_number": 17, "seqno_to_time_mapping": "N/A"}}
Dec 08 09:49:37 compute-1 ceph-mon[79846]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 5917 microseconds, and 2527 cpu microseconds.
Dec 08 09:49:37 compute-1 ceph-mon[79846]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 08 09:49:37 compute-1 ceph-mon[79846]: rocksdb: (Original Log Time 2025/12/08-09:49:37.985955) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #17: 498371 bytes OK
Dec 08 09:49:37 compute-1 ceph-mon[79846]: rocksdb: (Original Log Time 2025/12/08-09:49:37.985992) [db/memtable_list.cc:519] [default] Level-0 commit table #17 started
Dec 08 09:49:37 compute-1 ceph-mon[79846]: rocksdb: (Original Log Time 2025/12/08-09:49:37.988728) [db/memtable_list.cc:722] [default] Level-0 commit table #17: memtable #1 done
Dec 08 09:49:37 compute-1 ceph-mon[79846]: rocksdb: (Original Log Time 2025/12/08-09:49:37.988747) EVENT_LOG_v1 {"time_micros": 1765187377988742, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 08 09:49:37 compute-1 ceph-mon[79846]: rocksdb: (Original Log Time 2025/12/08-09:49:37.988762) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 08 09:49:37 compute-1 ceph-mon[79846]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 776400, prev total WAL file size 776400, number of live WAL files 2.
Dec 08 09:49:37 compute-1 ceph-mon[79846]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000013.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 08 09:49:37 compute-1 ceph-mon[79846]: rocksdb: (Original Log Time 2025/12/08-09:49:37.989317) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Dec 08 09:49:37 compute-1 ceph-mon[79846]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 08 09:49:37 compute-1 ceph-mon[79846]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [17(486KB)], [15(11MB)]
Dec 08 09:49:37 compute-1 ceph-mon[79846]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765187377989454, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [17], "files_L6": [15], "score": -1, "input_data_size": 12614168, "oldest_snapshot_seqno": -1}
Dec 08 09:49:38 compute-1 ceph-mon[79846]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #18: 3121 keys, 11400306 bytes, temperature: kUnknown
Dec 08 09:49:38 compute-1 ceph-mon[79846]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765187378048414, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 18, "file_size": 11400306, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11375245, "index_size": 16184, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7813, "raw_key_size": 80640, "raw_average_key_size": 25, "raw_value_size": 11313406, "raw_average_value_size": 3624, "num_data_blocks": 705, "num_entries": 3121, "num_filter_entries": 3121, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765187219, "oldest_key_time": 0, "file_creation_time": 1765187377, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "80a3bbeb-1e70-44e2-b668-bf1fa77bc39c", "db_session_id": "N6WMD309NYTLX53YA9N0", "orig_file_number": 18, "seqno_to_time_mapping": "N/A"}}
Dec 08 09:49:38 compute-1 ceph-mon[79846]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 08 09:49:38 compute-1 ceph-mon[79846]: rocksdb: (Original Log Time 2025/12/08-09:49:38.048737) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 11400306 bytes
Dec 08 09:49:38 compute-1 ceph-mon[79846]: rocksdb: (Original Log Time 2025/12/08-09:49:38.082468) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 213.7 rd, 193.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 11.6 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(48.2) write-amplify(22.9) OK, records in: 3641, records dropped: 520 output_compression: NoCompression
Dec 08 09:49:38 compute-1 ceph-mon[79846]: rocksdb: (Original Log Time 2025/12/08-09:49:38.082507) EVENT_LOG_v1 {"time_micros": 1765187378082493, "job": 6, "event": "compaction_finished", "compaction_time_micros": 59032, "compaction_time_cpu_micros": 22219, "output_level": 6, "num_output_files": 1, "total_output_size": 11400306, "num_input_records": 3641, "num_output_records": 3121, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 08 09:49:38 compute-1 ceph-mon[79846]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000017.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 08 09:49:38 compute-1 ceph-mon[79846]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765187378082805, "job": 6, "event": "table_file_deletion", "file_number": 17}
Dec 08 09:49:38 compute-1 ceph-mon[79846]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000015.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 08 09:49:38 compute-1 ceph-mon[79846]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765187378085562, "job": 6, "event": "table_file_deletion", "file_number": 15}
Dec 08 09:49:38 compute-1 ceph-mon[79846]: rocksdb: (Original Log Time 2025/12/08-09:49:37.989255) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 08 09:49:38 compute-1 ceph-mon[79846]: rocksdb: (Original Log Time 2025/12/08-09:49:38.085728) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 08 09:49:38 compute-1 ceph-mon[79846]: rocksdb: (Original Log Time 2025/12/08-09:49:38.085737) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 08 09:49:38 compute-1 ceph-mon[79846]: rocksdb: (Original Log Time 2025/12/08-09:49:38.085739) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 08 09:49:38 compute-1 ceph-mon[79846]: rocksdb: (Original Log Time 2025/12/08-09:49:38.085741) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 08 09:49:38 compute-1 ceph-mon[79846]: rocksdb: (Original Log Time 2025/12/08-09:49:38.085742) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 08 09:49:38 compute-1 radosgw[81249]: ====== starting new request req=0x7faefea325d0 =====
Dec 08 09:49:38 compute-1 radosgw[81249]: ====== req done req=0x7faefea325d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 08 09:49:38 compute-1 radosgw[81249]: beast: 0x7faefea325d0: 192.168.122.102 - anonymous [08/Dec/2025:09:49:38.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 08 09:49:38 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 12.1f scrub starts
Dec 08 09:49:38 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 12.1f scrub ok
Dec 08 09:49:38 compute-1 radosgw[81249]: ====== starting new request req=0x7faefea325d0 =====
Dec 08 09:49:38 compute-1 radosgw[81249]: ====== req done req=0x7faefea325d0 op status=0 http_status=200 latency=0.001000030s ======
Dec 08 09:49:38 compute-1 radosgw[81249]: beast: 0x7faefea325d0: 192.168.122.100 - anonymous [08/Dec/2025:09:49:38.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec 08 09:49:38 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e72 e72: 3 total, 3 up, 3 in
Dec 08 09:49:38 compute-1 ceph-mon[79846]: 10.6 deep-scrub starts
Dec 08 09:49:38 compute-1 ceph-mon[79846]: 10.6 deep-scrub ok
Dec 08 09:49:38 compute-1 ceph-mon[79846]: 9.c scrub starts
Dec 08 09:49:38 compute-1 ceph-mon[79846]: 9.c scrub ok
Dec 08 09:49:38 compute-1 ceph-mon[79846]: 10.4 scrub starts
Dec 08 09:49:38 compute-1 ceph-mon[79846]: 10.4 scrub ok
Dec 08 09:49:38 compute-1 ceph-mon[79846]: osdmap e71: 3 total, 3 up, 3 in
Dec 08 09:49:38 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Dec 08 09:49:39 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:39 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e58002720 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:39 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Dec 08 09:49:39 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Dec 08 09:49:39 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:39 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e640036e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:39 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e72 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 08 09:49:39 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:39 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e60003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:40 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e73 e73: 3 total, 3 up, 3 in
Dec 08 09:49:40 compute-1 radosgw[81249]: ====== starting new request req=0x7faefea325d0 =====
Dec 08 09:49:40 compute-1 radosgw[81249]: ====== req done req=0x7faefea325d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 08 09:49:40 compute-1 radosgw[81249]: beast: 0x7faefea325d0: 192.168.122.102 - anonymous [08/Dec/2025:09:49:40.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 08 09:49:40 compute-1 ceph-mon[79846]: pgmap v73: 353 pgs: 1 active+clean+scrubbing, 4 active+remapped, 348 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 10 objects/s recovering
Dec 08 09:49:40 compute-1 ceph-mon[79846]: 12.1f scrub starts
Dec 08 09:49:40 compute-1 ceph-mon[79846]: 12.1f scrub ok
Dec 08 09:49:40 compute-1 ceph-mon[79846]: 11.c scrub starts
Dec 08 09:49:40 compute-1 ceph-mon[79846]: 11.c scrub ok
Dec 08 09:49:40 compute-1 ceph-mon[79846]: 12.3 scrub starts
Dec 08 09:49:40 compute-1 ceph-mon[79846]: 12.3 scrub ok
Dec 08 09:49:40 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Dec 08 09:49:40 compute-1 ceph-mon[79846]: osdmap e72: 3 total, 3 up, 3 in
Dec 08 09:49:40 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:49:40 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:49:40 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec 08 09:49:40 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
Dec 08 09:49:40 compute-1 ceph-mgr[80153]: mgr handle_mgr_map respawning because set of enabled modules changed!
Dec 08 09:49:40 compute-1 ceph-mgr[80153]: mgr respawn  e: '/usr/bin/ceph-mgr'
Dec 08 09:49:40 compute-1 ceph-mgr[80153]: mgr respawn  0: '/usr/bin/ceph-mgr'
Dec 08 09:49:40 compute-1 ceph-mgr[80153]: mgr respawn  1: '-n'
Dec 08 09:49:40 compute-1 ceph-mgr[80153]: mgr respawn  2: 'mgr.compute-1.mmkaif'
Dec 08 09:49:40 compute-1 ceph-mgr[80153]: mgr respawn  3: '-f'
Dec 08 09:49:40 compute-1 ceph-mgr[80153]: mgr respawn  4: '--setuser'
Dec 08 09:49:40 compute-1 ceph-mgr[80153]: mgr respawn  5: 'ceph'
Dec 08 09:49:40 compute-1 ceph-mgr[80153]: mgr respawn  6: '--setgroup'
Dec 08 09:49:40 compute-1 ceph-mgr[80153]: mgr respawn  7: 'ceph'
Dec 08 09:49:40 compute-1 ceph-mgr[80153]: mgr respawn  8: '--default-log-to-file=false'
Dec 08 09:49:40 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 12.1b scrub starts
Dec 08 09:49:40 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 12.1b scrub ok
Dec 08 09:49:40 compute-1 sshd-session[83281]: Connection closed by 192.168.122.100 port 54590
Dec 08 09:49:40 compute-1 sshd-session[83262]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 08 09:49:40 compute-1 systemd[1]: session-34.scope: Deactivated successfully.
Dec 08 09:49:40 compute-1 systemd[1]: session-34.scope: Consumed 19.585s CPU time.
Dec 08 09:49:40 compute-1 systemd-logind[795]: Session 34 logged out. Waiting for processes to exit.
Dec 08 09:49:40 compute-1 systemd-logind[795]: Removed session 34.
Dec 08 09:49:40 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: ignoring --setuser ceph since I am not root
Dec 08 09:49:40 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: ignoring --setgroup ceph since I am not root
Dec 08 09:49:40 compute-1 ceph-mgr[80153]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Dec 08 09:49:40 compute-1 ceph-mgr[80153]: pidfile_write: ignore empty --pid-file
Dec 08 09:49:40 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'alerts'
Dec 08 09:49:40 compute-1 ceph-mgr[80153]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 08 09:49:40 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:49:40.816+0000 7f059ec17140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 08 09:49:40 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'balancer'
Dec 08 09:49:40 compute-1 radosgw[81249]: ====== starting new request req=0x7faefea325d0 =====
Dec 08 09:49:40 compute-1 radosgw[81249]: ====== req done req=0x7faefea325d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 08 09:49:40 compute-1 radosgw[81249]: beast: 0x7faefea325d0: 192.168.122.100 - anonymous [08/Dec/2025:09:49:40.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 08 09:49:40 compute-1 ceph-mgr[80153]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 08 09:49:40 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'cephadm'
Dec 08 09:49:40 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:49:40.899+0000 7f059ec17140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 08 09:49:41 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e74 e74: 3 total, 3 up, 3 in
Dec 08 09:49:41 compute-1 ceph-mon[79846]: 10.1a scrub starts
Dec 08 09:49:41 compute-1 ceph-mon[79846]: 10.1a scrub ok
Dec 08 09:49:41 compute-1 ceph-mon[79846]: 11.b scrub starts
Dec 08 09:49:41 compute-1 ceph-mon[79846]: 11.b scrub ok
Dec 08 09:49:41 compute-1 ceph-mon[79846]: 12.2 deep-scrub starts
Dec 08 09:49:41 compute-1 ceph-mon[79846]: 12.2 deep-scrub ok
Dec 08 09:49:41 compute-1 ceph-mon[79846]: osdmap e73: 3 total, 3 up, 3 in
Dec 08 09:49:41 compute-1 ceph-mon[79846]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
Dec 08 09:49:41 compute-1 ceph-mon[79846]: mgrmap e27: compute-0.kitiwu(active, since 90s), standbys: compute-1.mmkaif, compute-2.zqytsv
Dec 08 09:49:41 compute-1 ceph-mon[79846]: osdmap e74: 3 total, 3 up, 3 in
Dec 08 09:49:41 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:41 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88009ad0 fd 15 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:41 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 12.16 scrub starts
Dec 08 09:49:41 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'crash'
Dec 08 09:49:41 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 12.16 scrub ok
Dec 08 09:49:41 compute-1 ceph-mgr[80153]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 08 09:49:41 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:49:41.753+0000 7f059ec17140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 08 09:49:41 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'dashboard'
Dec 08 09:49:41 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:41 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e58003820 fd 15 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:41 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:41 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e640036e0 fd 15 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:42 compute-1 radosgw[81249]: ====== starting new request req=0x7faefea325d0 =====
Dec 08 09:49:42 compute-1 radosgw[81249]: ====== req done req=0x7faefea325d0 op status=0 http_status=200 latency=0.001000029s ======
Dec 08 09:49:42 compute-1 radosgw[81249]: beast: 0x7faefea325d0: 192.168.122.102 - anonymous [08/Dec/2025:09:49:42.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 08 09:49:42 compute-1 ceph-mon[79846]: 12.1b scrub starts
Dec 08 09:49:42 compute-1 ceph-mon[79846]: 12.1b scrub ok
Dec 08 09:49:42 compute-1 ceph-mon[79846]: 11.9 scrub starts
Dec 08 09:49:42 compute-1 ceph-mon[79846]: 11.9 scrub ok
Dec 08 09:49:42 compute-1 ceph-mon[79846]: 12.1d scrub starts
Dec 08 09:49:42 compute-1 ceph-mon[79846]: 12.1d scrub ok
Dec 08 09:49:42 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e75 e75: 3 total, 3 up, 3 in
Dec 08 09:49:42 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'devicehealth'
Dec 08 09:49:42 compute-1 ceph-mgr[80153]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 08 09:49:42 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:49:42.388+0000 7f059ec17140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 08 09:49:42 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'diskprediction_local'
Dec 08 09:49:42 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 08 09:49:42 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 08 09:49:42 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]:   from numpy import show_config as show_numpy_config
Dec 08 09:49:42 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:49:42.566+0000 7f059ec17140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 08 09:49:42 compute-1 ceph-mgr[80153]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 08 09:49:42 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'influx'
Dec 08 09:49:42 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 12.14 scrub starts
Dec 08 09:49:42 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 12.14 scrub ok
Dec 08 09:49:42 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:49:42.640+0000 7f059ec17140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 08 09:49:42 compute-1 ceph-mgr[80153]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 08 09:49:42 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'insights'
Dec 08 09:49:42 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'iostat'
Dec 08 09:49:42 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:49:42.784+0000 7f059ec17140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 08 09:49:42 compute-1 ceph-mgr[80153]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 08 09:49:42 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'k8sevents'
Dec 08 09:49:42 compute-1 radosgw[81249]: ====== starting new request req=0x7faefea325d0 =====
Dec 08 09:49:42 compute-1 radosgw[81249]: ====== req done req=0x7faefea325d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 08 09:49:42 compute-1 radosgw[81249]: beast: 0x7faefea325d0: 192.168.122.100 - anonymous [08/Dec/2025:09:49:42.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 08 09:49:43 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'localpool'
Dec 08 09:49:43 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'mds_autoscaler'
Dec 08 09:49:43 compute-1 ceph-mon[79846]: 12.16 scrub starts
Dec 08 09:49:43 compute-1 ceph-mon[79846]: 11.d scrub starts
Dec 08 09:49:43 compute-1 ceph-mon[79846]: 12.16 scrub ok
Dec 08 09:49:43 compute-1 ceph-mon[79846]: 11.d scrub ok
Dec 08 09:49:43 compute-1 ceph-mon[79846]: 12.1e scrub starts
Dec 08 09:49:43 compute-1 ceph-mon[79846]: 12.1e scrub ok
Dec 08 09:49:43 compute-1 ceph-mon[79846]: osdmap e75: 3 total, 3 up, 3 in
Dec 08 09:49:43 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e76 e76: 3 total, 3 up, 3 in
Dec 08 09:49:43 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'mirroring'
Dec 08 09:49:43 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:43 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e60003c10 fd 15 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:43 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'nfs'
Dec 08 09:49:43 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 12.1 scrub starts
Dec 08 09:49:43 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 12.1 scrub ok
Dec 08 09:49:43 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:49:43.787+0000 7f059ec17140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 08 09:49:43 compute-1 ceph-mgr[80153]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 08 09:49:43 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'orchestrator'
Dec 08 09:49:43 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:43 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88009ad0 fd 15 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:43 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:43 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e640036e0 fd 15 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:43 compute-1 sshd-session[86213]: error: kex_exchange_identification: read: Connection timed out
Dec 08 09:49:43 compute-1 sshd-session[86213]: banner exchange: Connection from 120.48.123.76 port 38758: Connection timed out
Dec 08 09:49:44 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:49:44.008+0000 7f059ec17140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 08 09:49:44 compute-1 ceph-mgr[80153]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 08 09:49:44 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'osd_perf_query'
Dec 08 09:49:44 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:49:44.089+0000 7f059ec17140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 08 09:49:44 compute-1 ceph-mgr[80153]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 08 09:49:44 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'osd_support'
Dec 08 09:49:44 compute-1 radosgw[81249]: ====== starting new request req=0x7faefea325d0 =====
Dec 08 09:49:44 compute-1 radosgw[81249]: ====== req done req=0x7faefea325d0 op status=0 http_status=200 latency=0.001000030s ======
Dec 08 09:49:44 compute-1 radosgw[81249]: beast: 0x7faefea325d0: 192.168.122.102 - anonymous [08/Dec/2025:09:49:44.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec 08 09:49:44 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:49:44.163+0000 7f059ec17140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 08 09:49:44 compute-1 ceph-mgr[80153]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 08 09:49:44 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'pg_autoscaler'
Dec 08 09:49:44 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:49:44.243+0000 7f059ec17140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 08 09:49:44 compute-1 ceph-mgr[80153]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 08 09:49:44 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'progress'
Dec 08 09:49:44 compute-1 ceph-mon[79846]: 12.14 scrub starts
Dec 08 09:49:44 compute-1 ceph-mon[79846]: 12.14 scrub ok
Dec 08 09:49:44 compute-1 ceph-mon[79846]: 8.e scrub starts
Dec 08 09:49:44 compute-1 ceph-mon[79846]: 8.e scrub ok
Dec 08 09:49:44 compute-1 ceph-mon[79846]: 8.c scrub starts
Dec 08 09:49:44 compute-1 ceph-mon[79846]: 8.c scrub ok
Dec 08 09:49:44 compute-1 ceph-mon[79846]: osdmap e76: 3 total, 3 up, 3 in
Dec 08 09:49:44 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:49:44.315+0000 7f059ec17140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 08 09:49:44 compute-1 ceph-mgr[80153]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 08 09:49:44 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'prometheus'
Dec 08 09:49:44 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Dec 08 09:49:44 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Dec 08 09:49:44 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:49:44.699+0000 7f059ec17140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 08 09:49:44 compute-1 ceph-mgr[80153]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 08 09:49:44 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'rbd_support'
Dec 08 09:49:44 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:49:44.800+0000 7f059ec17140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 08 09:49:44 compute-1 ceph-mgr[80153]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 08 09:49:44 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'restful'
Dec 08 09:49:44 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e76 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 08 09:49:44 compute-1 radosgw[81249]: ====== starting new request req=0x7faefea325d0 =====
Dec 08 09:49:44 compute-1 radosgw[81249]: ====== req done req=0x7faefea325d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 08 09:49:44 compute-1 radosgw[81249]: beast: 0x7faefea325d0: 192.168.122.100 - anonymous [08/Dec/2025:09:49:44.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 08 09:49:45 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'rgw'
Dec 08 09:49:45 compute-1 ceph-mon[79846]: 12.1 scrub starts
Dec 08 09:49:45 compute-1 ceph-mon[79846]: 11.2 scrub starts
Dec 08 09:49:45 compute-1 ceph-mon[79846]: 12.1 scrub ok
Dec 08 09:49:45 compute-1 ceph-mon[79846]: 11.2 scrub ok
Dec 08 09:49:45 compute-1 ceph-mon[79846]: 11.17 scrub starts
Dec 08 09:49:45 compute-1 ceph-mon[79846]: 11.17 scrub ok
Dec 08 09:49:45 compute-1 ceph-mon[79846]: 11.12 scrub starts
Dec 08 09:49:45 compute-1 ceph-mon[79846]: 11.12 scrub ok
Dec 08 09:49:45 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:49:45.304+0000 7f059ec17140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 08 09:49:45 compute-1 ceph-mgr[80153]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 08 09:49:45 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'rook'
Dec 08 09:49:45 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:45 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e58003820 fd 15 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:45 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Dec 08 09:49:45 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Dec 08 09:49:45 compute-1 sshd-session[86252]: Received disconnect from 103.191.92.236 port 40520:11: Bye Bye [preauth]
Dec 08 09:49:45 compute-1 sshd-session[86252]: Disconnected from authenticating user root 103.191.92.236 port 40520 [preauth]
Dec 08 09:49:45 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:45 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e60003c10 fd 15 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:45 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:49:45.878+0000 7f059ec17140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 08 09:49:45 compute-1 ceph-mgr[80153]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 08 09:49:45 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'selftest'
Dec 08 09:49:45 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:45 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88009ad0 fd 15 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:45 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:49:45.945+0000 7f059ec17140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 08 09:49:45 compute-1 ceph-mgr[80153]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 08 09:49:45 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'snap_schedule'
Dec 08 09:49:46 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:49:46.021+0000 7f059ec17140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 08 09:49:46 compute-1 ceph-mgr[80153]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 08 09:49:46 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'stats'
Dec 08 09:49:46 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'status'
Dec 08 09:49:46 compute-1 radosgw[81249]: ====== starting new request req=0x7faefea325d0 =====
Dec 08 09:49:46 compute-1 radosgw[81249]: ====== req done req=0x7faefea325d0 op status=0 http_status=200 latency=0.001000029s ======
Dec 08 09:49:46 compute-1 radosgw[81249]: beast: 0x7faefea325d0: 192.168.122.102 - anonymous [08/Dec/2025:09:49:46.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 08 09:49:46 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:49:46.175+0000 7f059ec17140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec 08 09:49:46 compute-1 ceph-mgr[80153]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec 08 09:49:46 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'telegraf'
Dec 08 09:49:46 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:49:46.246+0000 7f059ec17140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 08 09:49:46 compute-1 ceph-mgr[80153]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 08 09:49:46 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'telemetry'
Dec 08 09:49:46 compute-1 ceph-mon[79846]: 8.1 scrub starts
Dec 08 09:49:46 compute-1 ceph-mon[79846]: 8.1 scrub ok
Dec 08 09:49:46 compute-1 ceph-mon[79846]: 8.1f scrub starts
Dec 08 09:49:46 compute-1 ceph-mon[79846]: 8.1f scrub ok
Dec 08 09:49:46 compute-1 ceph-mon[79846]: 11.1 scrub starts
Dec 08 09:49:46 compute-1 ceph-mon[79846]: 11.1 scrub ok
Dec 08 09:49:46 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:49:46.397+0000 7f059ec17140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 08 09:49:46 compute-1 ceph-mgr[80153]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 08 09:49:46 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'test_orchestrator'
Dec 08 09:49:46 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:49:46.608+0000 7f059ec17140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 08 09:49:46 compute-1 ceph-mgr[80153]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 08 09:49:46 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'volumes'
Dec 08 09:49:46 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 11.f scrub starts
Dec 08 09:49:46 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 11.f scrub ok
Dec 08 09:49:46 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:49:46.873+0000 7f059ec17140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 08 09:49:46 compute-1 ceph-mgr[80153]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 08 09:49:46 compute-1 ceph-mgr[80153]: mgr[py] Loading python module 'zabbix'
Dec 08 09:49:46 compute-1 radosgw[81249]: ====== starting new request req=0x7faefea325d0 =====
Dec 08 09:49:46 compute-1 radosgw[81249]: ====== req done req=0x7faefea325d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 08 09:49:46 compute-1 radosgw[81249]: beast: 0x7faefea325d0: 192.168.122.100 - anonymous [08/Dec/2025:09:49:46.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 08 09:49:46 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 2025-12-08T09:49:46.964+0000 7f059ec17140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 08 09:49:46 compute-1 ceph-mgr[80153]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 08 09:49:46 compute-1 ceph-mgr[80153]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 08 09:49:46 compute-1 ceph-mgr[80153]: mgr load Constructed class from module: dashboard
Dec 08 09:49:46 compute-1 ceph-mgr[80153]: [prometheus DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 08 09:49:46 compute-1 ceph-mgr[80153]: mgr load Constructed class from module: prometheus
Dec 08 09:49:46 compute-1 ceph-mgr[80153]: [dashboard INFO root] server: ssl=no host=192.168.122.101 port=8443
Dec 08 09:49:46 compute-1 ceph-mgr[80153]: [prometheus INFO root] server_addr: :: server_port: 9283
Dec 08 09:49:46 compute-1 ceph-mgr[80153]: [prometheus INFO root] Starting engine...
Dec 08 09:49:46 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: [08/Dec/2025:09:49:46] ENGINE Bus STARTING
Dec 08 09:49:46 compute-1 ceph-mgr[80153]: [dashboard INFO root] Configured CherryPy, starting engine...
Dec 08 09:49:46 compute-1 ceph-mgr[80153]: [dashboard INFO root] Starting engine...
Dec 08 09:49:46 compute-1 ceph-mgr[80153]: [prometheus INFO cherrypy.error] [08/Dec/2025:09:49:46] ENGINE Bus STARTING
Dec 08 09:49:46 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: CherryPy Checker:
Dec 08 09:49:46 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: The Application mounted at '' has an empty config.
Dec 08 09:49:46 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: 
Dec 08 09:49:46 compute-1 ceph-mgr[80153]: ms_deliver_dispatch: unhandled message 0x561d79029860 mon_map magic: 0 from mon.2 v2:192.168.122.101:3300/0
Dec 08 09:49:46 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e77 e77: 3 total, 3 up, 3 in
Dec 08 09:49:47 compute-1 ceph-mgr[80153]: [dashboard INFO root] Engine started...
Dec 08 09:49:47 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: [08/Dec/2025:09:49:47] ENGINE Serving on http://:::9283
Dec 08 09:49:47 compute-1 ceph-mgr[80153]: [prometheus INFO cherrypy.error] [08/Dec/2025:09:49:47] ENGINE Serving on http://:::9283
Dec 08 09:49:47 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-1-mmkaif[80149]: [08/Dec/2025:09:49:47] ENGINE Bus STARTED
Dec 08 09:49:47 compute-1 ceph-mgr[80153]: [prometheus INFO cherrypy.error] [08/Dec/2025:09:49:47] ENGINE Bus STARTED
Dec 08 09:49:47 compute-1 ceph-mgr[80153]: [prometheus INFO root] Engine started.
Dec 08 09:49:47 compute-1 ceph-mon[79846]: 8.0 scrub starts
Dec 08 09:49:47 compute-1 ceph-mon[79846]: 8.0 scrub ok
Dec 08 09:49:47 compute-1 ceph-mon[79846]: 11.19 scrub starts
Dec 08 09:49:47 compute-1 ceph-mon[79846]: 11.19 scrub ok
Dec 08 09:49:47 compute-1 ceph-mon[79846]: 11.f scrub starts
Dec 08 09:49:47 compute-1 ceph-mon[79846]: 11.f scrub ok
Dec 08 09:49:47 compute-1 ceph-mon[79846]: Active manager daemon compute-0.kitiwu restarted
Dec 08 09:49:47 compute-1 ceph-mon[79846]: Activating manager daemon compute-0.kitiwu
Dec 08 09:49:47 compute-1 ceph-mon[79846]: osdmap e77: 3 total, 3 up, 3 in
Dec 08 09:49:47 compute-1 ceph-mon[79846]: mgrmap e28: compute-0.kitiwu(active, starting, since 0.0452981s), standbys: compute-1.mmkaif, compute-2.zqytsv
Dec 08 09:49:47 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 08 09:49:47 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 08 09:49:47 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 08 09:49:47 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.ywanut"}]: dispatch
Dec 08 09:49:47 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.tjxjxt"}]: dispatch
Dec 08 09:49:47 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.hhmzvb"}]: dispatch
Dec 08 09:49:47 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mgr metadata", "who": "compute-0.kitiwu", "id": "compute-0.kitiwu"}]: dispatch
Dec 08 09:49:47 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mgr metadata", "who": "compute-1.mmkaif", "id": "compute-1.mmkaif"}]: dispatch
Dec 08 09:49:47 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mgr metadata", "who": "compute-2.zqytsv", "id": "compute-2.zqytsv"}]: dispatch
Dec 08 09:49:47 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 08 09:49:47 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 08 09:49:47 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 08 09:49:47 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec 08 09:49:47 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 08 09:49:47 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec 08 09:49:47 compute-1 ceph-mon[79846]: Standby manager daemon compute-1.mmkaif restarted
Dec 08 09:49:47 compute-1 ceph-mon[79846]: Standby manager daemon compute-1.mmkaif started
Dec 08 09:49:47 compute-1 ceph-mon[79846]: Standby manager daemon compute-2.zqytsv restarted
Dec 08 09:49:47 compute-1 ceph-mon[79846]: Standby manager daemon compute-2.zqytsv started
Dec 08 09:49:47 compute-1 ceph-mon[79846]: Manager daemon compute-0.kitiwu is now available
Dec 08 09:49:47 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec 08 09:49:47 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 08 09:49:47 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.kitiwu/mirror_snapshot_schedule"}]: dispatch
Dec 08 09:49:47 compute-1 sshd-session[86279]: Accepted publickey for ceph-admin from 192.168.122.100 port 56338 ssh2: RSA SHA256:8l9QqAG/R36mxhB8hMA3YKcV0YDWbaRrGefHfTlg3OU
Dec 08 09:49:47 compute-1 systemd-logind[795]: New session 36 of user ceph-admin.
Dec 08 09:49:47 compute-1 systemd[1]: Started Session 36 of User ceph-admin.
Dec 08 09:49:47 compute-1 sshd-session[86279]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 08 09:49:47 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:47 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e640036e0 fd 15 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:47 compute-1 sudo[86284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 08 09:49:47 compute-1 sudo[86284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:49:47 compute-1 sudo[86284]: pam_unix(sudo:session): session closed for user root
Dec 08 09:49:47 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Dec 08 09:49:47 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Dec 08 09:49:47 compute-1 sudo[86309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Dec 08 09:49:47 compute-1 sudo[86309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:49:47 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:47 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e58003820 fd 15 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:47 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:47 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e60003c10 fd 15 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:48 compute-1 radosgw[81249]: ====== starting new request req=0x7faefea325d0 =====
Dec 08 09:49:48 compute-1 radosgw[81249]: ====== req done req=0x7faefea325d0 op status=0 http_status=200 latency=0.001000029s ======
Dec 08 09:49:48 compute-1 radosgw[81249]: beast: 0x7faefea325d0: 192.168.122.102 - anonymous [08/Dec/2025:09:49:48.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 08 09:49:48 compute-1 ceph-mon[79846]: 8.7 deep-scrub starts
Dec 08 09:49:48 compute-1 ceph-mon[79846]: 8.7 deep-scrub ok
Dec 08 09:49:48 compute-1 ceph-mon[79846]: 8.5 scrub starts
Dec 08 09:49:48 compute-1 ceph-mon[79846]: 8.5 scrub ok
Dec 08 09:49:48 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.kitiwu/trash_purge_schedule"}]: dispatch
Dec 08 09:49:48 compute-1 ceph-mon[79846]: 11.5 scrub starts
Dec 08 09:49:48 compute-1 ceph-mon[79846]: 11.5 scrub ok
Dec 08 09:49:48 compute-1 ceph-mon[79846]: mgrmap e29: compute-0.kitiwu(active, since 1.06894s), standbys: compute-1.mmkaif, compute-2.zqytsv
Dec 08 09:49:48 compute-1 podman[86405]: 2025-12-08 09:49:48.379454298 +0000 UTC m=+0.057863863 container exec 0b1ceffabe238cda53122741c28a5b5e679f0efe3fbc9966719b6787d3af3102 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-crash-compute-1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 08 09:49:48 compute-1 podman[86405]: 2025-12-08 09:49:48.533353983 +0000 UTC m=+0.211763518 container exec_died 0b1ceffabe238cda53122741c28a5b5e679f0efe3fbc9966719b6787d3af3102 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-crash-compute-1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 08 09:49:48 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Dec 08 09:49:48 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Dec 08 09:49:48 compute-1 radosgw[81249]: ====== starting new request req=0x7faefea325d0 =====
Dec 08 09:49:48 compute-1 radosgw[81249]: ====== req done req=0x7faefea325d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 08 09:49:48 compute-1 radosgw[81249]: beast: 0x7faefea325d0: 192.168.122.100 - anonymous [08/Dec/2025:09:49:48.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 08 09:49:49 compute-1 podman[86542]: 2025-12-08 09:49:49.128855605 +0000 UTC m=+0.051214860 container exec 2fda3b355cd40eb61e8d8918a072c9229da4506f505876d6ee0a23fb8c342813 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 08 09:49:49 compute-1 podman[86542]: 2025-12-08 09:49:49.162581045 +0000 UTC m=+0.084940260 container exec_died 2fda3b355cd40eb61e8d8918a072c9229da4506f505876d6ee0a23fb8c342813 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 08 09:49:49 compute-1 ceph-mon[79846]: 11.6 scrub starts
Dec 08 09:49:49 compute-1 ceph-mon[79846]: 11.6 scrub ok
Dec 08 09:49:49 compute-1 ceph-mon[79846]: 11.3 scrub starts
Dec 08 09:49:49 compute-1 ceph-mon[79846]: 11.3 scrub ok
Dec 08 09:49:49 compute-1 ceph-mon[79846]: [08/Dec/2025:09:49:48] ENGINE Bus STARTING
Dec 08 09:49:49 compute-1 ceph-mon[79846]: [08/Dec/2025:09:49:48] ENGINE Serving on https://192.168.122.100:7150
Dec 08 09:49:49 compute-1 ceph-mon[79846]: [08/Dec/2025:09:49:48] ENGINE Client ('192.168.122.100', 34334) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 08 09:49:49 compute-1 ceph-mon[79846]: [08/Dec/2025:09:49:48] ENGINE Serving on http://192.168.122.100:8765
Dec 08 09:49:49 compute-1 ceph-mon[79846]: [08/Dec/2025:09:49:48] ENGINE Bus STARTED
Dec 08 09:49:49 compute-1 ceph-mon[79846]: 8.8 scrub starts
Dec 08 09:49:49 compute-1 ceph-mon[79846]: 8.8 scrub ok
Dec 08 09:49:49 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Dec 08 09:49:49 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e78 e78: 3 total, 3 up, 3 in
Dec 08 09:49:49 compute-1 podman[86613]: 2025-12-08 09:49:49.467298354 +0000 UTC m=+0.071914791 container exec 23497d82ee9ea70335fca0f3c4309147d81269293469bcfee1aca51ee00d5dc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 08 09:49:49 compute-1 podman[86613]: 2025-12-08 09:49:49.483481665 +0000 UTC m=+0.088098122 container exec_died 23497d82ee9ea70335fca0f3c4309147d81269293469bcfee1aca51ee00d5dc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 08 09:49:49 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:49 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e60003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:49 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Dec 08 09:49:49 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Dec 08 09:49:49 compute-1 podman[86677]: 2025-12-08 09:49:49.747979474 +0000 UTC m=+0.062431866 container exec b9fd0246c48061305c7811136c6a39b092dbdac0fc6cb0fd31313ce10b304fdc (image=quay.io/ceph/haproxy:2.3, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-haproxy-nfs-cephfs-compute-1-opvoqw)
Dec 08 09:49:49 compute-1 podman[86677]: 2025-12-08 09:49:49.76123923 +0000 UTC m=+0.075691622 container exec_died b9fd0246c48061305c7811136c6a39b092dbdac0fc6cb0fd31313ce10b304fdc (image=quay.io/ceph/haproxy:2.3, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-haproxy-nfs-cephfs-compute-1-opvoqw)
Dec 08 09:49:49 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:49 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e64004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:49 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e78 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 08 09:49:49 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:49 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e58003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:50 compute-1 podman[86744]: 2025-12-08 09:49:50.04471322 +0000 UTC m=+0.073919420 container exec ef47fc5b71b1e6ae6538605cddf4e1fdf4707b8b994d55ceabe6b66724d9d061 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-keepalived-nfs-cephfs-compute-1-khfxdl, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, architecture=x86_64, build-date=2023-02-22T09:23:20)
Dec 08 09:49:50 compute-1 podman[86744]: 2025-12-08 09:49:50.067461801 +0000 UTC m=+0.096667971 container exec_died ef47fc5b71b1e6ae6538605cddf4e1fdf4707b8b994d55ceabe6b66724d9d061 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-keepalived-nfs-cephfs-compute-1-khfxdl, version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, io.openshift.tags=Ceph keepalived, architecture=x86_64, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, com.redhat.component=keepalived-container, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, description=keepalived for Ceph)
Dec 08 09:49:50 compute-1 radosgw[81249]: ====== starting new request req=0x7faefea325d0 =====
Dec 08 09:49:50 compute-1 radosgw[81249]: ====== req done req=0x7faefea325d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 08 09:49:50 compute-1 radosgw[81249]: beast: 0x7faefea325d0: 192.168.122.102 - anonymous [08/Dec/2025:09:49:50.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 08 09:49:50 compute-1 sudo[86309]: pam_unix(sudo:session): session closed for user root
Dec 08 09:49:50 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 78 pg[9.a( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=78) [0] r=0 lpr=78 pi=[52,78)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:50 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 78 pg[9.1a( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=78) [0] r=0 lpr=78 pi=[52,78)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:50 compute-1 sudo[86779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 08 09:49:50 compute-1 sudo[86779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:49:50 compute-1 sudo[86779]: pam_unix(sudo:session): session closed for user root
Dec 08 09:49:50 compute-1 sudo[86804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 08 09:49:50 compute-1 sudo[86804]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:49:50 compute-1 ceph-mon[79846]: 11.18 scrub starts
Dec 08 09:49:50 compute-1 ceph-mon[79846]: 11.18 scrub ok
Dec 08 09:49:50 compute-1 ceph-mon[79846]: pgmap v4: 353 pgs: 353 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Dec 08 09:49:50 compute-1 ceph-mon[79846]: 11.8 scrub starts
Dec 08 09:49:50 compute-1 ceph-mon[79846]: 11.8 scrub ok
Dec 08 09:49:50 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Dec 08 09:49:50 compute-1 ceph-mon[79846]: osdmap e78: 3 total, 3 up, 3 in
Dec 08 09:49:50 compute-1 ceph-mon[79846]: mgrmap e30: compute-0.kitiwu(active, since 2s), standbys: compute-1.mmkaif, compute-2.zqytsv
Dec 08 09:49:50 compute-1 ceph-mon[79846]: 11.7 scrub starts
Dec 08 09:49:50 compute-1 ceph-mon[79846]: 11.7 scrub ok
Dec 08 09:49:50 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec 08 09:49:50 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec 08 09:49:50 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec 08 09:49:50 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec 08 09:49:50 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Dec 08 09:49:50 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Dec 08 09:49:50 compute-1 sudo[86804]: pam_unix(sudo:session): session closed for user root
Dec 08 09:49:50 compute-1 radosgw[81249]: ====== starting new request req=0x7faefea325d0 =====
Dec 08 09:49:50 compute-1 radosgw[81249]: ====== req done req=0x7faefea325d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 08 09:49:50 compute-1 radosgw[81249]: beast: 0x7faefea325d0: 192.168.122.100 - anonymous [08/Dec/2025:09:49:50.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 08 09:49:50 compute-1 sudo[86859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 08 09:49:50 compute-1 sudo[86859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:49:50 compute-1 sudo[86859]: pam_unix(sudo:session): session closed for user root
Dec 08 09:49:51 compute-1 sudo[86884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Dec 08 09:49:51 compute-1 sudo[86884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:49:51 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e79 e79: 3 total, 3 up, 3 in
Dec 08 09:49:51 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 79 pg[9.1a( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=79) [0]/[1] r=-1 lpr=79 pi=[52,79)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:51 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 79 pg[9.a( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=79) [0]/[1] r=-1 lpr=79 pi=[52,79)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:51 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 79 pg[9.a( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=79) [0]/[1] r=-1 lpr=79 pi=[52,79)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 08 09:49:51 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 79 pg[9.1a( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=79) [0]/[1] r=-1 lpr=79 pi=[52,79)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 08 09:49:51 compute-1 sudo[86884]: pam_unix(sudo:session): session closed for user root
Dec 08 09:49:51 compute-1 ceph-mon[79846]: 8.1a scrub starts
Dec 08 09:49:51 compute-1 ceph-mon[79846]: 8.1a scrub ok
Dec 08 09:49:51 compute-1 ceph-mon[79846]: 11.16 scrub starts
Dec 08 09:49:51 compute-1 ceph-mon[79846]: 11.16 scrub ok
Dec 08 09:49:51 compute-1 ceph-mon[79846]: 8.1e scrub starts
Dec 08 09:49:51 compute-1 ceph-mon[79846]: 8.1e scrub ok
Dec 08 09:49:51 compute-1 ceph-mon[79846]: 8.1b scrub starts
Dec 08 09:49:51 compute-1 ceph-mon[79846]: 8.1b scrub ok
Dec 08 09:49:51 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Dec 08 09:49:51 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Dec 08 09:49:51 compute-1 ceph-mon[79846]: osdmap e79: 3 total, 3 up, 3 in
Dec 08 09:49:51 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec 08 09:49:51 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec 08 09:49:51 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec 08 09:49:51 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec 08 09:49:51 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 08 09:49:51 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:51 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e58003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:51 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Dec 08 09:49:51 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Dec 08 09:49:51 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:51 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88009ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:51 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:51 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e64004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:52 compute-1 radosgw[81249]: ====== starting new request req=0x7faefea325d0 =====
Dec 08 09:49:52 compute-1 radosgw[81249]: ====== req done req=0x7faefea325d0 op status=0 http_status=200 latency=0.001000029s ======
Dec 08 09:49:52 compute-1 radosgw[81249]: beast: 0x7faefea325d0: 192.168.122.102 - anonymous [08/Dec/2025:09:49:52.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 08 09:49:52 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e80 e80: 3 total, 3 up, 3 in
Dec 08 09:49:52 compute-1 ceph-mon[79846]: pgmap v6: 353 pgs: 353 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Dec 08 09:49:52 compute-1 ceph-mon[79846]: 8.b scrub starts
Dec 08 09:49:52 compute-1 ceph-mon[79846]: 8.b scrub ok
Dec 08 09:49:52 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec 08 09:49:52 compute-1 ceph-mon[79846]: mgrmap e31: compute-0.kitiwu(active, since 4s), standbys: compute-1.mmkaif, compute-2.zqytsv
Dec 08 09:49:52 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec 08 09:49:52 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 08 09:49:52 compute-1 ceph-mon[79846]: 8.1d scrub starts
Dec 08 09:49:52 compute-1 ceph-mon[79846]: 8.1d scrub ok
Dec 08 09:49:52 compute-1 ceph-mon[79846]: 11.14 scrub starts
Dec 08 09:49:52 compute-1 ceph-mon[79846]: 11.14 scrub ok
Dec 08 09:49:52 compute-1 ceph-mon[79846]: osdmap e80: 3 total, 3 up, 3 in
Dec 08 09:49:52 compute-1 sudo[86928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 08 09:49:52 compute-1 sudo[86928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:49:52 compute-1 sudo[86928]: pam_unix(sudo:session): session closed for user root
Dec 08 09:49:52 compute-1 sudo[86953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/etc/ceph
Dec 08 09:49:52 compute-1 sudo[86953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:49:52 compute-1 sudo[86953]: pam_unix(sudo:session): session closed for user root
Dec 08 09:49:52 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Dec 08 09:49:52 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Dec 08 09:49:52 compute-1 sudo[86978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/etc/ceph/ceph.conf.new
Dec 08 09:49:52 compute-1 sudo[86978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:49:52 compute-1 sudo[86978]: pam_unix(sudo:session): session closed for user root
Dec 08 09:49:52 compute-1 sudo[87003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4
Dec 08 09:49:52 compute-1 sudo[87003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:49:52 compute-1 sudo[87003]: pam_unix(sudo:session): session closed for user root
Dec 08 09:49:52 compute-1 sudo[87028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/etc/ceph/ceph.conf.new
Dec 08 09:49:52 compute-1 sudo[87028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:49:52 compute-1 sudo[87028]: pam_unix(sudo:session): session closed for user root
Dec 08 09:49:52 compute-1 radosgw[81249]: ====== starting new request req=0x7faefea325d0 =====
Dec 08 09:49:52 compute-1 radosgw[81249]: ====== req done req=0x7faefea325d0 op status=0 http_status=200 latency=0.001000029s ======
Dec 08 09:49:52 compute-1 radosgw[81249]: beast: 0x7faefea325d0: 192.168.122.100 - anonymous [08/Dec/2025:09:49:52.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 08 09:49:52 compute-1 sudo[87076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/etc/ceph/ceph.conf.new
Dec 08 09:49:52 compute-1 sudo[87076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:49:53 compute-1 sudo[87076]: pam_unix(sudo:session): session closed for user root
Dec 08 09:49:53 compute-1 sudo[87101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/etc/ceph/ceph.conf.new
Dec 08 09:49:53 compute-1 sudo[87101]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:49:53 compute-1 sudo[87101]: pam_unix(sudo:session): session closed for user root
Dec 08 09:49:53 compute-1 sudo[87126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 08 09:49:53 compute-1 sudo[87126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:49:53 compute-1 sudo[87126]: pam_unix(sudo:session): session closed for user root
Dec 08 09:49:53 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e81 e81: 3 total, 3 up, 3 in
Dec 08 09:49:53 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 81 pg[9.1a( v 49'1026 (0'0,49'1026] local-lis/les=0/0 n=5 ec=52/36 lis/c=79/52 les/c/f=80/53/0 sis=81) [0] r=0 lpr=81 pi=[52,81)/1 luod=0'0 crt=49'1026 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:53 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 81 pg[9.a( v 49'1026 (0'0,49'1026] local-lis/les=0/0 n=6 ec=52/36 lis/c=79/52 les/c/f=80/53/0 sis=81) [0] r=0 lpr=81 pi=[52,81)/1 luod=0'0 crt=49'1026 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:53 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 81 pg[9.a( v 49'1026 (0'0,49'1026] local-lis/les=0/0 n=6 ec=52/36 lis/c=79/52 les/c/f=80/53/0 sis=81) [0] r=0 lpr=81 pi=[52,81)/1 crt=49'1026 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:53 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 81 pg[9.1a( v 49'1026 (0'0,49'1026] local-lis/les=0/0 n=5 ec=52/36 lis/c=79/52 les/c/f=80/53/0 sis=81) [0] r=0 lpr=81 pi=[52,81)/1 crt=49'1026 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:53 compute-1 sudo[87151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config
Dec 08 09:49:53 compute-1 sudo[87151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:49:53 compute-1 sudo[87151]: pam_unix(sudo:session): session closed for user root
Dec 08 09:49:53 compute-1 sudo[87176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config
Dec 08 09:49:53 compute-1 sudo[87176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:49:53 compute-1 sudo[87176]: pam_unix(sudo:session): session closed for user root
Dec 08 09:49:53 compute-1 sudo[87201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf.new
Dec 08 09:49:53 compute-1 sudo[87201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:49:53 compute-1 sudo[87201]: pam_unix(sudo:session): session closed for user root
Dec 08 09:49:53 compute-1 sudo[87226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4
Dec 08 09:49:53 compute-1 sudo[87226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:49:53 compute-1 sudo[87226]: pam_unix(sudo:session): session closed for user root
Dec 08 09:49:53 compute-1 sudo[87251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf.new
Dec 08 09:49:53 compute-1 sudo[87251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:49:53 compute-1 sudo[87251]: pam_unix(sudo:session): session closed for user root
Dec 08 09:49:53 compute-1 ceph-mon[79846]: 11.13 scrub starts
Dec 08 09:49:53 compute-1 ceph-mon[79846]: 11.13 scrub ok
Dec 08 09:49:53 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec 08 09:49:53 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec 08 09:49:53 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 08 09:49:53 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 08 09:49:53 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 08 09:49:53 compute-1 ceph-mon[79846]: Updating compute-0:/etc/ceph/ceph.conf
Dec 08 09:49:53 compute-1 ceph-mon[79846]: Updating compute-1:/etc/ceph/ceph.conf
Dec 08 09:49:53 compute-1 ceph-mon[79846]: Updating compute-2:/etc/ceph/ceph.conf
Dec 08 09:49:53 compute-1 ceph-mon[79846]: 11.1f scrub starts
Dec 08 09:49:53 compute-1 ceph-mon[79846]: 11.1f scrub ok
Dec 08 09:49:53 compute-1 ceph-mon[79846]: 11.1c scrub starts
Dec 08 09:49:53 compute-1 ceph-mon[79846]: 11.1c scrub ok
Dec 08 09:49:53 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Dec 08 09:49:53 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Dec 08 09:49:53 compute-1 ceph-mon[79846]: osdmap e81: 3 total, 3 up, 3 in
Dec 08 09:49:53 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:53 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e58003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:53 compute-1 sudo[87299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf.new
Dec 08 09:49:53 compute-1 sudo[87299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:49:53 compute-1 sudo[87299]: pam_unix(sudo:session): session closed for user root
Dec 08 09:49:53 compute-1 sudo[87327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf.new
Dec 08 09:49:53 compute-1 sudo[87327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:49:53 compute-1 sudo[87327]: pam_unix(sudo:session): session closed for user root
Dec 08 09:49:53 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Dec 08 09:49:53 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Dec 08 09:49:53 compute-1 sudo[87353]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf.new /var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf
Dec 08 09:49:53 compute-1 sudo[87353]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:49:53 compute-1 sudo[87353]: pam_unix(sudo:session): session closed for user root
Dec 08 09:49:53 compute-1 sudo[87378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 08 09:49:53 compute-1 sudo[87378]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:49:53 compute-1 sudo[87378]: pam_unix(sudo:session): session closed for user root
Dec 08 09:49:53 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:53 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e58003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:53 compute-1 sudo[87403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/etc/ceph
Dec 08 09:49:53 compute-1 sudo[87403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:49:53 compute-1 sudo[87403]: pam_unix(sudo:session): session closed for user root
Dec 08 09:49:53 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:53 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e78001ba0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:53 compute-1 sudo[87428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/etc/ceph/ceph.client.admin.keyring.new
Dec 08 09:49:53 compute-1 sudo[87428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:49:53 compute-1 sudo[87428]: pam_unix(sudo:session): session closed for user root
Dec 08 09:49:54 compute-1 sudo[87453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4
Dec 08 09:49:54 compute-1 sudo[87453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:49:54 compute-1 sudo[87453]: pam_unix(sudo:session): session closed for user root
Dec 08 09:49:54 compute-1 sudo[87478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/etc/ceph/ceph.client.admin.keyring.new
Dec 08 09:49:54 compute-1 sudo[87478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:49:54 compute-1 sudo[87478]: pam_unix(sudo:session): session closed for user root
Dec 08 09:49:54 compute-1 radosgw[81249]: ====== starting new request req=0x7faefea325d0 =====
Dec 08 09:49:54 compute-1 radosgw[81249]: ====== req done req=0x7faefea325d0 op status=0 http_status=200 latency=0.001000029s ======
Dec 08 09:49:54 compute-1 radosgw[81249]: beast: 0x7faefea325d0: 192.168.122.102 - anonymous [08/Dec/2025:09:49:54.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 08 09:49:54 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e82 e82: 3 total, 3 up, 3 in
Dec 08 09:49:54 compute-1 sudo[87526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/etc/ceph/ceph.client.admin.keyring.new
Dec 08 09:49:54 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 82 pg[9.a( v 49'1026 (0'0,49'1026] local-lis/les=81/82 n=6 ec=52/36 lis/c=79/52 les/c/f=80/53/0 sis=81) [0] r=0 lpr=81 pi=[52,81)/1 crt=49'1026 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:54 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 82 pg[9.1a( v 49'1026 (0'0,49'1026] local-lis/les=81/82 n=5 ec=52/36 lis/c=79/52 les/c/f=80/53/0 sis=81) [0] r=0 lpr=81 pi=[52,81)/1 crt=49'1026 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:49:54 compute-1 sudo[87526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:49:54 compute-1 sudo[87526]: pam_unix(sudo:session): session closed for user root
Dec 08 09:49:54 compute-1 sudo[87551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/etc/ceph/ceph.client.admin.keyring.new
Dec 08 09:49:54 compute-1 sudo[87551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:49:54 compute-1 sudo[87551]: pam_unix(sudo:session): session closed for user root
Dec 08 09:49:54 compute-1 sudo[87576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Dec 08 09:49:54 compute-1 sudo[87576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:49:54 compute-1 sudo[87576]: pam_unix(sudo:session): session closed for user root
Dec 08 09:49:54 compute-1 sudo[87601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config
Dec 08 09:49:54 compute-1 sudo[87601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:49:54 compute-1 sudo[87601]: pam_unix(sudo:session): session closed for user root
Dec 08 09:49:54 compute-1 ceph-mon[79846]: pgmap v9: 353 pgs: 353 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Dec 08 09:49:54 compute-1 ceph-mon[79846]: 12.13 scrub starts
Dec 08 09:49:54 compute-1 ceph-mon[79846]: 12.13 scrub ok
Dec 08 09:49:54 compute-1 ceph-mon[79846]: Updating compute-1:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf
Dec 08 09:49:54 compute-1 ceph-mon[79846]: Updating compute-2:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf
Dec 08 09:49:54 compute-1 ceph-mon[79846]: Updating compute-0:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf
Dec 08 09:49:54 compute-1 ceph-mon[79846]: 11.1d scrub starts
Dec 08 09:49:54 compute-1 ceph-mon[79846]: 11.1d scrub ok
Dec 08 09:49:54 compute-1 ceph-mon[79846]: 11.10 scrub starts
Dec 08 09:49:54 compute-1 ceph-mon[79846]: 11.10 scrub ok
Dec 08 09:49:54 compute-1 ceph-mon[79846]: osdmap e82: 3 total, 3 up, 3 in
Dec 08 09:49:54 compute-1 sudo[87626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config
Dec 08 09:49:54 compute-1 sudo[87626]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:49:54 compute-1 sudo[87626]: pam_unix(sudo:session): session closed for user root
Dec 08 09:49:54 compute-1 sudo[87651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.client.admin.keyring.new
Dec 08 09:49:54 compute-1 sudo[87651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:49:54 compute-1 sudo[87651]: pam_unix(sudo:session): session closed for user root
Dec 08 09:49:54 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Dec 08 09:49:54 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Dec 08 09:49:54 compute-1 sudo[87676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4
Dec 08 09:49:54 compute-1 sudo[87676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:49:54 compute-1 sudo[87676]: pam_unix(sudo:session): session closed for user root
Dec 08 09:49:54 compute-1 sudo[87701]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.client.admin.keyring.new
Dec 08 09:49:54 compute-1 sudo[87701]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:49:54 compute-1 sudo[87701]: pam_unix(sudo:session): session closed for user root
Dec 08 09:49:54 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 08 09:49:54 compute-1 radosgw[81249]: ====== starting new request req=0x7faefea325d0 =====
Dec 08 09:49:54 compute-1 radosgw[81249]: ====== req done req=0x7faefea325d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 08 09:49:54 compute-1 radosgw[81249]: beast: 0x7faefea325d0: 192.168.122.100 - anonymous [08/Dec/2025:09:49:54.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 08 09:49:54 compute-1 sudo[87749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.client.admin.keyring.new
Dec 08 09:49:54 compute-1 sudo[87749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:49:54 compute-1 sudo[87749]: pam_unix(sudo:session): session closed for user root
Dec 08 09:49:54 compute-1 sudo[87774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.client.admin.keyring.new
Dec 08 09:49:54 compute-1 sudo[87774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:49:54 compute-1 sudo[87774]: pam_unix(sudo:session): session closed for user root
Dec 08 09:49:55 compute-1 sudo[87799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.client.admin.keyring.new /var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.client.admin.keyring
Dec 08 09:49:55 compute-1 sudo[87799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:49:55 compute-1 sudo[87799]: pam_unix(sudo:session): session closed for user root
Dec 08 09:49:55 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e83 e83: 3 total, 3 up, 3 in
Dec 08 09:49:55 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:55 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c001080 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:55 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 83 pg[9.1d( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=68/68 les/c/f=69/69/0 sis=83) [0] r=0 lpr=83 pi=[68,83)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:55 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 83 pg[9.d( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=68/68 les/c/f=69/69/0 sis=83) [0] r=0 lpr=83 pi=[68,83)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:55 compute-1 ceph-mon[79846]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec 08 09:49:55 compute-1 ceph-mon[79846]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec 08 09:49:55 compute-1 ceph-mon[79846]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 08 09:49:55 compute-1 ceph-mon[79846]: 8.d deep-scrub starts
Dec 08 09:49:55 compute-1 ceph-mon[79846]: 8.d deep-scrub ok
Dec 08 09:49:55 compute-1 ceph-mon[79846]: Updating compute-1:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.client.admin.keyring
Dec 08 09:49:55 compute-1 ceph-mon[79846]: Updating compute-2:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.client.admin.keyring
Dec 08 09:49:55 compute-1 ceph-mon[79846]: Updating compute-0:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.client.admin.keyring
Dec 08 09:49:55 compute-1 ceph-mon[79846]: 11.1b scrub starts
Dec 08 09:49:55 compute-1 ceph-mon[79846]: 11.11 scrub starts
Dec 08 09:49:55 compute-1 ceph-mon[79846]: 11.1b scrub ok
Dec 08 09:49:55 compute-1 ceph-mon[79846]: 11.11 scrub ok
Dec 08 09:49:55 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Dec 08 09:49:55 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec 08 09:49:55 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec 08 09:49:55 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec 08 09:49:55 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec 08 09:49:55 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec 08 09:49:55 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec 08 09:49:55 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec 08 09:49:55 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec 08 09:49:55 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 08 09:49:55 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 08 09:49:55 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 08 09:49:55 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 8.18 deep-scrub starts
Dec 08 09:49:55 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 8.18 deep-scrub ok
Dec 08 09:49:55 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:55 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88009ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:55 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:55 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e58003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:56 compute-1 radosgw[81249]: ====== starting new request req=0x7faefea325d0 =====
Dec 08 09:49:56 compute-1 radosgw[81249]: ====== req done req=0x7faefea325d0 op status=0 http_status=200 latency=0.001000029s ======
Dec 08 09:49:56 compute-1 radosgw[81249]: beast: 0x7faefea325d0: 192.168.122.102 - anonymous [08/Dec/2025:09:49:56.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 08 09:49:56 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e84 e84: 3 total, 3 up, 3 in
Dec 08 09:49:56 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 84 pg[9.d( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=68/68 les/c/f=69/69/0 sis=84) [0]/[2] r=-1 lpr=84 pi=[68,84)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:56 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 84 pg[9.d( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=68/68 les/c/f=69/69/0 sis=84) [0]/[2] r=-1 lpr=84 pi=[68,84)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 08 09:49:56 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 84 pg[9.1d( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=68/68 les/c/f=69/69/0 sis=84) [0]/[2] r=-1 lpr=84 pi=[68,84)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:56 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 84 pg[9.1d( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=68/68 les/c/f=69/69/0 sis=84) [0]/[2] r=-1 lpr=84 pi=[68,84)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 08 09:49:56 compute-1 ceph-mon[79846]: pgmap v12: 353 pgs: 1 active+clean+scrubbing, 2 active+remapped, 350 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 0 B/s wr, 14 op/s; 54 B/s, 3 objects/s recovering
Dec 08 09:49:56 compute-1 ceph-mon[79846]: 11.e scrub starts
Dec 08 09:49:56 compute-1 ceph-mon[79846]: 11.e scrub ok
Dec 08 09:49:56 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Dec 08 09:49:56 compute-1 ceph-mon[79846]: osdmap e83: 3 total, 3 up, 3 in
Dec 08 09:49:56 compute-1 ceph-mon[79846]: 8.13 scrub starts
Dec 08 09:49:56 compute-1 ceph-mon[79846]: 8.13 scrub ok
Dec 08 09:49:56 compute-1 ceph-mon[79846]: 8.18 deep-scrub starts
Dec 08 09:49:56 compute-1 ceph-mon[79846]: 8.18 deep-scrub ok
Dec 08 09:49:56 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Dec 08 09:49:56 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Dec 08 09:49:56 compute-1 radosgw[81249]: ====== starting new request req=0x7faefea325d0 =====
Dec 08 09:49:56 compute-1 radosgw[81249]: ====== req done req=0x7faefea325d0 op status=0 http_status=200 latency=0.001000029s ======
Dec 08 09:49:56 compute-1 radosgw[81249]: beast: 0x7faefea325d0: 192.168.122.100 - anonymous [08/Dec/2025:09:49:56.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 08 09:49:57 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:57 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e78001ba0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:57 compute-1 ceph-mon[79846]: 8.2 scrub starts
Dec 08 09:49:57 compute-1 ceph-mon[79846]: 8.2 scrub ok
Dec 08 09:49:57 compute-1 ceph-mon[79846]: osdmap e84: 3 total, 3 up, 3 in
Dec 08 09:49:57 compute-1 ceph-mon[79846]: 12.19 scrub starts
Dec 08 09:49:57 compute-1 ceph-mon[79846]: 8.4 scrub starts
Dec 08 09:49:57 compute-1 ceph-mon[79846]: 12.19 scrub ok
Dec 08 09:49:57 compute-1 ceph-mon[79846]: 8.4 scrub ok
Dec 08 09:49:57 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Dec 08 09:49:57 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e85 e85: 3 total, 3 up, 3 in
Dec 08 09:49:57 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:57 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c001f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:57 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:57 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88009ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:58 compute-1 radosgw[81249]: ====== starting new request req=0x7faefea325d0 =====
Dec 08 09:49:58 compute-1 radosgw[81249]: ====== req done req=0x7faefea325d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 08 09:49:58 compute-1 radosgw[81249]: beast: 0x7faefea325d0: 192.168.122.102 - anonymous [08/Dec/2025:09:49:58.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 08 09:49:58 compute-1 ceph-mon[79846]: pgmap v15: 353 pgs: 1 active+clean+scrubbing, 2 active+remapped, 350 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 0 B/s wr, 14 op/s; 54 B/s, 3 objects/s recovering
Dec 08 09:49:58 compute-1 ceph-mon[79846]: 8.1c scrub starts
Dec 08 09:49:58 compute-1 ceph-mon[79846]: 8.1c scrub ok
Dec 08 09:49:58 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Dec 08 09:49:58 compute-1 ceph-mon[79846]: osdmap e85: 3 total, 3 up, 3 in
Dec 08 09:49:58 compute-1 ceph-mon[79846]: 12.8 scrub starts
Dec 08 09:49:58 compute-1 ceph-mon[79846]: 12.8 scrub ok
Dec 08 09:49:58 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e86 e86: 3 total, 3 up, 3 in
Dec 08 09:49:58 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 86 pg[9.1d( v 49'1026 (0'0,49'1026] local-lis/les=0/0 n=5 ec=52/36 lis/c=84/68 les/c/f=85/69/0 sis=86) [0] r=0 lpr=86 pi=[68,86)/1 luod=0'0 crt=49'1026 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:58 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 86 pg[9.d( v 49'1026 (0'0,49'1026] local-lis/les=0/0 n=6 ec=52/36 lis/c=84/68 les/c/f=85/69/0 sis=86) [0] r=0 lpr=86 pi=[68,86)/1 luod=0'0 crt=49'1026 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:49:58 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 86 pg[9.1d( v 49'1026 (0'0,49'1026] local-lis/les=0/0 n=5 ec=52/36 lis/c=84/68 les/c/f=85/69/0 sis=86) [0] r=0 lpr=86 pi=[68,86)/1 crt=49'1026 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:58 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 86 pg[9.d( v 49'1026 (0'0,49'1026] local-lis/les=0/0 n=6 ec=52/36 lis/c=84/68 les/c/f=85/69/0 sis=86) [0] r=0 lpr=86 pi=[68,86)/1 crt=49'1026 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:49:58 compute-1 radosgw[81249]: ====== starting new request req=0x7faefea325d0 =====
Dec 08 09:49:58 compute-1 radosgw[81249]: ====== req done req=0x7faefea325d0 op status=0 http_status=200 latency=0.001000029s ======
Dec 08 09:49:58 compute-1 radosgw[81249]: beast: 0x7faefea325d0: 192.168.122.100 - anonymous [08/Dec/2025:09:49:58.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 08 09:49:59 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:59 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e58003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:59 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 08 09:49:59 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:59 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e78002930 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:49:59 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:49:59 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c001f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:50:00 compute-1 radosgw[81249]: ====== starting new request req=0x7faefea325d0 =====
Dec 08 09:50:00 compute-1 radosgw[81249]: ====== req done req=0x7faefea325d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 08 09:50:00 compute-1 radosgw[81249]: beast: 0x7faefea325d0: 192.168.122.102 - anonymous [08/Dec/2025:09:50:00.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 08 09:50:00 compute-1 ceph-mon[79846]: 11.a scrub starts
Dec 08 09:50:00 compute-1 ceph-mon[79846]: 11.a scrub ok
Dec 08 09:50:00 compute-1 ceph-mon[79846]: osdmap e86: 3 total, 3 up, 3 in
Dec 08 09:50:00 compute-1 ceph-mon[79846]: 12.a scrub starts
Dec 08 09:50:00 compute-1 ceph-mon[79846]: 12.a scrub ok
Dec 08 09:50:00 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Dec 08 09:50:00 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e87 e87: 3 total, 3 up, 3 in
Dec 08 09:50:00 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 87 pg[9.1f( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=64/64 les/c/f=65/65/0 sis=87) [0] r=0 lpr=87 pi=[64,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:50:00 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 87 pg[9.f( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=64/64 les/c/f=65/65/0 sis=87) [0] r=0 lpr=87 pi=[64,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:50:00 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 87 pg[9.1d( v 49'1026 (0'0,49'1026] local-lis/les=86/87 n=5 ec=52/36 lis/c=84/68 les/c/f=85/69/0 sis=86) [0] r=0 lpr=86 pi=[68,86)/1 crt=49'1026 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:50:00 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 87 pg[9.d( v 49'1026 (0'0,49'1026] local-lis/les=86/87 n=6 ec=52/36 lis/c=84/68 les/c/f=85/69/0 sis=86) [0] r=0 lpr=86 pi=[68,86)/1 crt=49'1026 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:50:00 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 9.a scrub starts
Dec 08 09:50:00 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 9.a scrub ok
Dec 08 09:50:00 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-haproxy-nfs-cephfs-compute-1-opvoqw[85819]: [WARNING] 341/095000 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 08 09:50:00 compute-1 radosgw[81249]: ====== starting new request req=0x7faefea325d0 =====
Dec 08 09:50:00 compute-1 radosgw[81249]: ====== req done req=0x7faefea325d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 08 09:50:00 compute-1 radosgw[81249]: beast: 0x7faefea325d0: 192.168.122.100 - anonymous [08/Dec/2025:09:50:00.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 08 09:50:01 compute-1 ceph-mon[79846]: pgmap v18: 353 pgs: 2 active+remapped, 351 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 82 B/s, 3 objects/s recovering
Dec 08 09:50:01 compute-1 ceph-mon[79846]: 12.18 deep-scrub starts
Dec 08 09:50:01 compute-1 ceph-mon[79846]: 12.18 deep-scrub ok
Dec 08 09:50:01 compute-1 ceph-mon[79846]: 12.e scrub starts
Dec 08 09:50:01 compute-1 ceph-mon[79846]: 12.e scrub ok
Dec 08 09:50:01 compute-1 ceph-mon[79846]: overall HEALTH_OK
Dec 08 09:50:01 compute-1 ceph-mon[79846]: 12.7 scrub starts
Dec 08 09:50:01 compute-1 ceph-mon[79846]: 12.7 scrub ok
Dec 08 09:50:01 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Dec 08 09:50:01 compute-1 ceph-mon[79846]: osdmap e87: 3 total, 3 up, 3 in
Dec 08 09:50:01 compute-1 ceph-mon[79846]: 9.a scrub starts
Dec 08 09:50:01 compute-1 ceph-mon[79846]: 9.a scrub ok
Dec 08 09:50:01 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Dec 08 09:50:01 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e88 e88: 3 total, 3 up, 3 in
Dec 08 09:50:01 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 88 pg[9.10( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=88) [0] r=0 lpr=88 pi=[52,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:50:01 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 88 pg[9.f( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=64/64 les/c/f=65/65/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[64,88)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:50:01 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 88 pg[9.f( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=64/64 les/c/f=65/65/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[64,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 08 09:50:01 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 88 pg[9.1f( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=64/64 les/c/f=65/65/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[64,88)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:50:01 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 88 pg[9.1f( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=64/64 les/c/f=65/65/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[64,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 08 09:50:01 compute-1 sudo[87827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 08 09:50:01 compute-1 sudo[87827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:50:01 compute-1 sudo[87827]: pam_unix(sudo:session): session closed for user root
Dec 08 09:50:01 compute-1 sudo[87852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 08 09:50:01 compute-1 sudo[87852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:50:01 compute-1 sudo[87852]: pam_unix(sudo:session): session closed for user root
Dec 08 09:50:01 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:50:01 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88009ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:50:01 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 9.d scrub starts
Dec 08 09:50:01 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 9.d scrub ok
Dec 08 09:50:01 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:50:01 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e58003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:50:01 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:50:01 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e78002930 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:50:02 compute-1 radosgw[81249]: ====== starting new request req=0x7faefea325d0 =====
Dec 08 09:50:02 compute-1 radosgw[81249]: ====== req done req=0x7faefea325d0 op status=0 http_status=200 latency=0.001000029s ======
Dec 08 09:50:02 compute-1 radosgw[81249]: beast: 0x7faefea325d0: 192.168.122.102 - anonymous [08/Dec/2025:09:50:02.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 08 09:50:02 compute-1 ceph-mon[79846]: 12.c deep-scrub starts
Dec 08 09:50:02 compute-1 ceph-mon[79846]: 12.c deep-scrub ok
Dec 08 09:50:02 compute-1 ceph-mon[79846]: pgmap v20: 353 pgs: 2 active+remapped, 351 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 74 B/s, 2 objects/s recovering
Dec 08 09:50:02 compute-1 ceph-mon[79846]: 12.1a scrub starts
Dec 08 09:50:02 compute-1 ceph-mon[79846]: 12.1a scrub ok
Dec 08 09:50:02 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Dec 08 09:50:02 compute-1 ceph-mon[79846]: osdmap e88: 3 total, 3 up, 3 in
Dec 08 09:50:02 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec 08 09:50:02 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec 08 09:50:02 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec 08 09:50:02 compute-1 ceph-mon[79846]: Reconfiguring mon.compute-0 (monmap changed)...
Dec 08 09:50:02 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 08 09:50:02 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 08 09:50:02 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 08 09:50:02 compute-1 ceph-mon[79846]: Reconfiguring daemon mon.compute-0 on compute-0
Dec 08 09:50:02 compute-1 ceph-mon[79846]: 9.d scrub starts
Dec 08 09:50:02 compute-1 ceph-mon[79846]: 9.d scrub ok
Dec 08 09:50:02 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 08 09:50:02 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec 08 09:50:02 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec 08 09:50:02 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e89 e89: 3 total, 3 up, 3 in
Dec 08 09:50:02 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 89 pg[9.10( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=89) [0]/[1] r=-1 lpr=89 pi=[52,89)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:50:02 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 89 pg[9.10( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=89) [0]/[1] r=-1 lpr=89 pi=[52,89)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 08 09:50:02 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 9.e scrub starts
Dec 08 09:50:02 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 9.e scrub ok
Dec 08 09:50:02 compute-1 radosgw[81249]: ====== starting new request req=0x7faefea325d0 =====
Dec 08 09:50:02 compute-1 radosgw[81249]: ====== req done req=0x7faefea325d0 op status=0 http_status=200 latency=0.001000030s ======
Dec 08 09:50:02 compute-1 radosgw[81249]: beast: 0x7faefea325d0: 192.168.122.100 - anonymous [08/Dec/2025:09:50:02.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec 08 09:50:03 compute-1 ceph-mon[79846]: 12.6 scrub starts
Dec 08 09:50:03 compute-1 ceph-mon[79846]: 12.6 scrub ok
Dec 08 09:50:03 compute-1 ceph-mon[79846]: 10.1 scrub starts
Dec 08 09:50:03 compute-1 ceph-mon[79846]: 10.1 scrub ok
Dec 08 09:50:03 compute-1 ceph-mon[79846]: Reconfiguring mgr.compute-0.kitiwu (monmap changed)...
Dec 08 09:50:03 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.kitiwu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 08 09:50:03 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 08 09:50:03 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 08 09:50:03 compute-1 ceph-mon[79846]: Reconfiguring daemon mgr.compute-0.kitiwu on compute-0
Dec 08 09:50:03 compute-1 ceph-mon[79846]: osdmap e89: 3 total, 3 up, 3 in
Dec 08 09:50:03 compute-1 ceph-mon[79846]: 9.e scrub starts
Dec 08 09:50:03 compute-1 ceph-mon[79846]: 9.e scrub ok
Dec 08 09:50:03 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec 08 09:50:03 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec 08 09:50:03 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec 08 09:50:03 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 08 09:50:03 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Dec 08 09:50:03 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e90 e90: 3 total, 3 up, 3 in
Dec 08 09:50:03 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 90 pg[9.f( v 49'1026 (0'0,49'1026] local-lis/les=0/0 n=6 ec=52/36 lis/c=88/64 les/c/f=89/65/0 sis=90) [0] r=0 lpr=90 pi=[64,90)/1 luod=0'0 crt=49'1026 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:50:03 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 90 pg[9.f( v 49'1026 (0'0,49'1026] local-lis/les=0/0 n=6 ec=52/36 lis/c=88/64 les/c/f=89/65/0 sis=90) [0] r=0 lpr=90 pi=[64,90)/1 crt=49'1026 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:50:03 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 90 pg[9.1f( v 49'1026 (0'0,49'1026] local-lis/les=0/0 n=5 ec=52/36 lis/c=88/64 les/c/f=89/65/0 sis=90) [0] r=0 lpr=90 pi=[64,90)/1 luod=0'0 crt=49'1026 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:50:03 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 90 pg[9.1f( v 49'1026 (0'0,49'1026] local-lis/les=0/0 n=5 ec=52/36 lis/c=88/64 les/c/f=89/65/0 sis=90) [0] r=0 lpr=90 pi=[64,90)/1 crt=49'1026 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:50:03 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 90 pg[9.11( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=90) [0] r=0 lpr=90 pi=[52,90)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:50:03 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:50:03 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c001f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:50:03 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Dec 08 09:50:03 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Dec 08 09:50:03 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:50:03 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e88009ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:50:03 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:50:03 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e58003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:50:04 compute-1 radosgw[81249]: ====== starting new request req=0x7faefea325d0 =====
Dec 08 09:50:04 compute-1 radosgw[81249]: ====== req done req=0x7faefea325d0 op status=0 http_status=200 latency=0.001000029s ======
Dec 08 09:50:04 compute-1 radosgw[81249]: beast: 0x7faefea325d0: 192.168.122.102 - anonymous [08/Dec/2025:09:50:04.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 08 09:50:04 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e91 e91: 3 total, 3 up, 3 in
Dec 08 09:50:04 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 91 pg[9.10( v 49'1026 (0'0,49'1026] local-lis/les=0/0 n=2 ec=52/36 lis/c=89/52 les/c/f=90/53/0 sis=91) [0] r=0 lpr=91 pi=[52,91)/1 luod=0'0 crt=49'1026 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:50:04 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 91 pg[9.10( v 49'1026 (0'0,49'1026] local-lis/les=0/0 n=2 ec=52/36 lis/c=89/52 les/c/f=90/53/0 sis=91) [0] r=0 lpr=91 pi=[52,91)/1 crt=49'1026 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:50:04 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 91 pg[9.11( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=91) [0]/[1] r=-1 lpr=91 pi=[52,91)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:50:04 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 91 pg[9.11( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=91) [0]/[1] r=-1 lpr=91 pi=[52,91)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 08 09:50:04 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 91 pg[9.f( v 49'1026 (0'0,49'1026] local-lis/les=90/91 n=6 ec=52/36 lis/c=88/64 les/c/f=89/65/0 sis=90) [0] r=0 lpr=90 pi=[64,90)/1 crt=49'1026 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:50:04 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 91 pg[9.1f( v 49'1026 (0'0,49'1026] local-lis/les=90/91 n=5 ec=52/36 lis/c=88/64 les/c/f=89/65/0 sis=90) [0] r=0 lpr=90 pi=[64,90)/1 crt=49'1026 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:50:04 compute-1 ceph-mon[79846]: 12.b scrub starts
Dec 08 09:50:04 compute-1 ceph-mon[79846]: 12.b scrub ok
Dec 08 09:50:04 compute-1 ceph-mon[79846]: Reconfiguring crash.compute-0 (monmap changed)...
Dec 08 09:50:04 compute-1 ceph-mon[79846]: Reconfiguring daemon crash.compute-0 on compute-0
Dec 08 09:50:04 compute-1 ceph-mon[79846]: 10.1e scrub starts
Dec 08 09:50:04 compute-1 ceph-mon[79846]: pgmap v23: 353 pgs: 2 active+remapped, 351 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Dec 08 09:50:04 compute-1 ceph-mon[79846]: 10.1e scrub ok
Dec 08 09:50:04 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Dec 08 09:50:04 compute-1 ceph-mon[79846]: osdmap e90: 3 total, 3 up, 3 in
Dec 08 09:50:04 compute-1 ceph-mon[79846]: 8.12 scrub starts
Dec 08 09:50:04 compute-1 ceph-mon[79846]: 8.12 scrub ok
Dec 08 09:50:04 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec 08 09:50:04 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec 08 09:50:04 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Dec 08 09:50:04 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 08 09:50:04 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Dec 08 09:50:04 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Dec 08 09:50:04 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 08 09:50:04 compute-1 radosgw[81249]: ====== starting new request req=0x7faefea325d0 =====
Dec 08 09:50:04 compute-1 radosgw[81249]: ====== req done req=0x7faefea325d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 08 09:50:04 compute-1 radosgw[81249]: beast: 0x7faefea325d0: 192.168.122.100 - anonymous [08/Dec/2025:09:50:04.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 08 09:50:05 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:50:05 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e78002930 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:50:05 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e92 e92: 3 total, 3 up, 3 in
Dec 08 09:50:05 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 92 pg[9.10( v 49'1026 (0'0,49'1026] local-lis/les=91/92 n=2 ec=52/36 lis/c=89/52 les/c/f=90/53/0 sis=91) [0] r=0 lpr=91 pi=[52,91)/1 crt=49'1026 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:50:05 compute-1 ceph-mon[79846]: 12.12 scrub starts
Dec 08 09:50:05 compute-1 ceph-mon[79846]: 12.12 scrub ok
Dec 08 09:50:05 compute-1 ceph-mon[79846]: Reconfiguring osd.1 (monmap changed)...
Dec 08 09:50:05 compute-1 ceph-mon[79846]: Reconfiguring daemon osd.1 on compute-0
Dec 08 09:50:05 compute-1 ceph-mon[79846]: 10.11 scrub starts
Dec 08 09:50:05 compute-1 ceph-mon[79846]: 10.11 scrub ok
Dec 08 09:50:05 compute-1 ceph-mon[79846]: osdmap e91: 3 total, 3 up, 3 in
Dec 08 09:50:05 compute-1 ceph-mon[79846]: 11.1e scrub starts
Dec 08 09:50:05 compute-1 ceph-mon[79846]: 11.1e scrub ok
Dec 08 09:50:05 compute-1 ceph-mon[79846]: 12.1c scrub starts
Dec 08 09:50:05 compute-1 ceph-mon[79846]: 12.1c scrub ok
Dec 08 09:50:05 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec 08 09:50:05 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec 08 09:50:05 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.slkrtm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 08 09:50:05 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 08 09:50:05 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec 08 09:50:05 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec 08 09:50:05 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:50:05 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c003070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:50:05 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:50:05 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c003070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:50:06 compute-1 radosgw[81249]: ====== starting new request req=0x7faefea325d0 =====
Dec 08 09:50:06 compute-1 radosgw[81249]: ====== req done req=0x7faefea325d0 op status=0 http_status=200 latency=0.001000029s ======
Dec 08 09:50:06 compute-1 radosgw[81249]: beast: 0x7faefea325d0: 192.168.122.102 - anonymous [08/Dec/2025:09:50:06.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 08 09:50:06 compute-1 ceph-mon[79846]: Reconfiguring rgw.rgw.compute-0.slkrtm (unknown last config time)...
Dec 08 09:50:06 compute-1 ceph-mon[79846]: Reconfiguring daemon rgw.rgw.compute-0.slkrtm on compute-0
Dec 08 09:50:06 compute-1 ceph-mon[79846]: 10.3 scrub starts
Dec 08 09:50:06 compute-1 ceph-mon[79846]: 10.3 scrub ok
Dec 08 09:50:06 compute-1 ceph-mon[79846]: pgmap v26: 353 pgs: 1 remapped+peering, 2 active+remapped, 350 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Dec 08 09:50:06 compute-1 ceph-mon[79846]: Reconfiguring node-exporter.compute-0 (unknown last config time)...
Dec 08 09:50:06 compute-1 ceph-mon[79846]: Reconfiguring daemon node-exporter.compute-0 on compute-0
Dec 08 09:50:06 compute-1 ceph-mon[79846]: osdmap e92: 3 total, 3 up, 3 in
Dec 08 09:50:06 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e93 e93: 3 total, 3 up, 3 in
Dec 08 09:50:06 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 93 pg[9.11( v 49'1026 (0'0,49'1026] local-lis/les=0/0 n=5 ec=52/36 lis/c=91/52 les/c/f=92/53/0 sis=93) [0] r=0 lpr=93 pi=[52,93)/1 luod=0'0 crt=49'1026 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:50:06 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 93 pg[9.11( v 49'1026 (0'0,49'1026] local-lis/les=0/0 n=5 ec=52/36 lis/c=91/52 les/c/f=92/53/0 sis=93) [0] r=0 lpr=93 pi=[52,93)/1 crt=49'1026 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:50:06 compute-1 radosgw[81249]: ====== starting new request req=0x7faefea325d0 =====
Dec 08 09:50:06 compute-1 radosgw[81249]: ====== req done req=0x7faefea325d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 08 09:50:06 compute-1 radosgw[81249]: beast: 0x7faefea325d0: 192.168.122.100 - anonymous [08/Dec/2025:09:50:06.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
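The radosgw "beast" entries are frontend access-log lines; here they record anonymous "HEAD /" requests from 192.168.122.100 and .102 roughly every two seconds, each answered with HTTP 200 at sub-millisecond latency, which looks like periodic health probing rather than client traffic. A minimal sketch for pulling the method, status and latency out of such a line (the regex only targets the fields visible in the log above):

    #!/usr/bin/env python3
    # Extract HTTP method, path, status and latency from a radosgw "beast" access-log line.
    import re

    sample = ('beast: 0x7faefea325d0: 192.168.122.102 - anonymous '
              '[08/Dec/2025:09:50:06.148 +0000] "HEAD / HTTP/1.0" 200 0 '
              '- - - latency=0.001000029s')

    pat = re.compile(r'"(?P<method>\S+) (?P<path>\S+) [^"]+" (?P<status>\d{3}) '
                     r'.*latency=(?P<latency>[0-9.]+)s')

    m = pat.search(sample)
    if m:
        print(m.group("method"), m.group("path"), m.group("status"),
              float(m.group("latency")))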
Dec 08 09:50:07 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:50:07 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e58003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:50:07 compute-1 ceph-mon[79846]: 9.13 scrub starts
Dec 08 09:50:07 compute-1 ceph-mon[79846]: 9.13 scrub ok
Dec 08 09:50:07 compute-1 ceph-mon[79846]: osdmap e93: 3 total, 3 up, 3 in
Dec 08 09:50:07 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec 08 09:50:07 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec 08 09:50:07 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e94 e94: 3 total, 3 up, 3 in
Dec 08 09:50:07 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 94 pg[9.11( v 49'1026 (0'0,49'1026] local-lis/les=93/94 n=5 ec=52/36 lis/c=91/52 les/c/f=92/53/0 sis=93) [0] r=0 lpr=93 pi=[52,93)/1 crt=49'1026 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:50:07 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:50:07 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e78003890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:50:07 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:50:07 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e78003890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:50:08 compute-1 radosgw[81249]: ====== starting new request req=0x7faefea325d0 =====
Dec 08 09:50:08 compute-1 radosgw[81249]: ====== req done req=0x7faefea325d0 op status=0 http_status=200 latency=0.001000029s ======
Dec 08 09:50:08 compute-1 radosgw[81249]: beast: 0x7faefea325d0: 192.168.122.102 - anonymous [08/Dec/2025:09:50:08.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 08 09:50:08 compute-1 ceph-mon[79846]: Reconfiguring alertmanager.compute-0 (dependencies changed)...
Dec 08 09:50:08 compute-1 ceph-mon[79846]: Reconfiguring daemon alertmanager.compute-0 on compute-0
Dec 08 09:50:08 compute-1 ceph-mon[79846]: 9.19 scrub starts
Dec 08 09:50:08 compute-1 ceph-mon[79846]: pgmap v29: 353 pgs: 1 remapped+peering, 2 active+remapped, 350 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Dec 08 09:50:08 compute-1 ceph-mon[79846]: 9.19 scrub ok
Dec 08 09:50:08 compute-1 ceph-mon[79846]: osdmap e94: 3 total, 3 up, 3 in
Dec 08 09:50:08 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec 08 09:50:08 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec 08 09:50:08 compute-1 radosgw[81249]: ====== starting new request req=0x7faefea325d0 =====
Dec 08 09:50:08 compute-1 radosgw[81249]: ====== req done req=0x7faefea325d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 08 09:50:08 compute-1 radosgw[81249]: beast: 0x7faefea325d0: 192.168.122.100 - anonymous [08/Dec/2025:09:50:08.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 08 09:50:09 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:50:09 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e78003890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:50:09 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e95 e95: 3 total, 3 up, 3 in
Dec 08 09:50:09 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 95 pg[9.12( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=95) [0] r=0 lpr=95 pi=[52,95)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:50:09 compute-1 ceph-mon[79846]: 9.18 scrub starts
Dec 08 09:50:09 compute-1 ceph-mon[79846]: 9.18 scrub ok
Dec 08 09:50:09 compute-1 ceph-mon[79846]: Reconfiguring grafana.compute-0 (dependencies changed)...
Dec 08 09:50:09 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Dec 08 09:50:09 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 08 09:50:09 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:50:09 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e58003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:50:09 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:50:09 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e78003890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:50:10 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e96 e96: 3 total, 3 up, 3 in
Dec 08 09:50:10 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 96 pg[9.12( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=96) [0]/[1] r=-1 lpr=96 pi=[52,96)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:50:10 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 96 pg[9.12( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=96) [0]/[1] r=-1 lpr=96 pi=[52,96)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 08 09:50:10 compute-1 radosgw[81249]: ====== starting new request req=0x7faefea325d0 =====
Dec 08 09:50:10 compute-1 radosgw[81249]: ====== req done req=0x7faefea325d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 08 09:50:10 compute-1 radosgw[81249]: beast: 0x7faefea325d0: 192.168.122.102 - anonymous [08/Dec/2025:09:50:10.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 08 09:50:10 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:50:10 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 08 09:50:10 compute-1 ceph-mon[79846]: Reconfiguring daemon grafana.compute-0 on compute-0
Dec 08 09:50:10 compute-1 ceph-mon[79846]: pgmap v31: 353 pgs: 353 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Dec 08 09:50:10 compute-1 ceph-mon[79846]: 9.1b scrub starts
Dec 08 09:50:10 compute-1 ceph-mon[79846]: 9.1b scrub ok
Dec 08 09:50:10 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Dec 08 09:50:10 compute-1 ceph-mon[79846]: osdmap e95: 3 total, 3 up, 3 in
Dec 08 09:50:10 compute-1 ceph-mon[79846]: osdmap e96: 3 total, 3 up, 3 in
Dec 08 09:50:10 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Dec 08 09:50:10 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Dec 08 09:50:10 compute-1 radosgw[81249]: ====== starting new request req=0x7faefea325d0 =====
Dec 08 09:50:10 compute-1 radosgw[81249]: ====== req done req=0x7faefea325d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 08 09:50:10 compute-1 radosgw[81249]: beast: 0x7faefea325d0: 192.168.122.100 - anonymous [08/Dec/2025:09:50:10.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 08 09:50:10 compute-1 sudo[87883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 08 09:50:10 compute-1 sudo[87883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:50:10 compute-1 sudo[87883]: pam_unix(sudo:session): session closed for user root
Dec 08 09:50:10 compute-1 sudo[87908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4
Dec 08 09:50:10 compute-1 sudo[87908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:50:11 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e97 e97: 3 total, 3 up, 3 in
Dec 08 09:50:11 compute-1 podman[87949]: 2025-12-08 09:50:11.377488878 +0000 UTC m=+0.061663332 container create 122f6ea59cbc32ca04f649f8c8e06dca24a1a65e345ee2a7b43bf1d393a69fb7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 08 09:50:11 compute-1 systemd[1]: Started libpod-conmon-122f6ea59cbc32ca04f649f8c8e06dca24a1a65e345ee2a7b43bf1d393a69fb7.scope.
Dec 08 09:50:11 compute-1 podman[87949]: 2025-12-08 09:50:11.342069779 +0000 UTC m=+0.026244253 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 08 09:50:11 compute-1 systemd[1]: Started libcrun container.
Dec 08 09:50:11 compute-1 podman[87949]: 2025-12-08 09:50:11.509409113 +0000 UTC m=+0.193583577 container init 122f6ea59cbc32ca04f649f8c8e06dca24a1a65e345ee2a7b43bf1d393a69fb7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_hellman, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec 08 09:50:11 compute-1 podman[87949]: 2025-12-08 09:50:11.520852426 +0000 UTC m=+0.205026890 container start 122f6ea59cbc32ca04f649f8c8e06dca24a1a65e345ee2a7b43bf1d393a69fb7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_hellman, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 08 09:50:11 compute-1 podman[87949]: 2025-12-08 09:50:11.52476793 +0000 UTC m=+0.208942394 container attach 122f6ea59cbc32ca04f649f8c8e06dca24a1a65e345ee2a7b43bf1d393a69fb7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_hellman, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Dec 08 09:50:11 compute-1 interesting_hellman[87966]: 167 167
Dec 08 09:50:11 compute-1 systemd[1]: libpod-122f6ea59cbc32ca04f649f8c8e06dca24a1a65e345ee2a7b43bf1d393a69fb7.scope: Deactivated successfully.
Dec 08 09:50:11 compute-1 podman[87949]: 2025-12-08 09:50:11.530708413 +0000 UTC m=+0.214882867 container died 122f6ea59cbc32ca04f649f8c8e06dca24a1a65e345ee2a7b43bf1d393a69fb7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_hellman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 08 09:50:11 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:50:11 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e78003890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:50:11 compute-1 systemd[1]: var-lib-containers-storage-overlay-ea079645c956ec7df51839af8a8733a8ae7bf8ce02d18b56a46a37bb11f5065d-merged.mount: Deactivated successfully.
Dec 08 09:50:11 compute-1 systemd[83266]: Starting Mark boot as successful...
Dec 08 09:50:11 compute-1 systemd[83266]: Finished Mark boot as successful.
Dec 08 09:50:11 compute-1 podman[87949]: 2025-12-08 09:50:11.579614215 +0000 UTC m=+0.263788679 container remove 122f6ea59cbc32ca04f649f8c8e06dca24a1a65e345ee2a7b43bf1d393a69fb7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_hellman, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 08 09:50:11 compute-1 systemd[1]: libpod-conmon-122f6ea59cbc32ca04f649f8c8e06dca24a1a65e345ee2a7b43bf1d393a69fb7.scope: Deactivated successfully.
Dec 08 09:50:11 compute-1 sudo[87908]: pam_unix(sudo:session): session closed for user root
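The sudo and podman entries above form one orchestrator pass: the mgr runs the staged cephadm script under /var/lib/ceph/<fsid>/ as root with "_orch deploy", and cephadm launches a short-lived container from the pinned ceph image; the "167 167" output appears to be the uid/gid probe for the ceph user inside that image. The same create -> init -> start -> attach -> died -> remove sequence repeats below for each daemon being reconfigured. A sketch that groups those podman lifecycle events by container ID from a journal export (the file path is an assumption):

    #!/usr/bin/env python3
    # Group podman container lifecycle events by container ID from a
    # plain-text journal export (the path is an assumption).
    import re
    from collections import defaultdict

    LOG = "compute-1.log"
    pat = re.compile(r"container (?P<event>create|init|start|attach|died|remove) "
                     r"(?P<cid>[0-9a-f]{64})")

    events = defaultdict(list)
    with open(LOG, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = pat.search(line)
            if m:
                events[m.group("cid")].append(m.group("event"))

    for cid, seq in events.items():
        print(cid[:12], "->", " ".join(seq))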
Dec 08 09:50:11 compute-1 sudo[87985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 08 09:50:11 compute-1 sudo[87985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:50:11 compute-1 sudo[87985]: pam_unix(sudo:session): session closed for user root
Dec 08 09:50:11 compute-1 ceph-mon[79846]: 9.7 scrub starts
Dec 08 09:50:11 compute-1 ceph-mon[79846]: 9.7 scrub ok
Dec 08 09:50:11 compute-1 ceph-mon[79846]: 9.1 scrub starts
Dec 08 09:50:11 compute-1 ceph-mon[79846]: 9.1 scrub ok
Dec 08 09:50:11 compute-1 ceph-mon[79846]: 11.1a scrub starts
Dec 08 09:50:11 compute-1 ceph-mon[79846]: 11.1a scrub ok
Dec 08 09:50:11 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec 08 09:50:11 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec 08 09:50:11 compute-1 ceph-mon[79846]: Reconfiguring crash.compute-1 (monmap changed)...
Dec 08 09:50:11 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec 08 09:50:11 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 08 09:50:11 compute-1 ceph-mon[79846]: Reconfiguring daemon crash.compute-1 on compute-1
Dec 08 09:50:11 compute-1 ceph-mon[79846]: 9.b scrub starts
Dec 08 09:50:11 compute-1 ceph-mon[79846]: pgmap v34: 353 pgs: 353 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Dec 08 09:50:11 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Dec 08 09:50:11 compute-1 ceph-mon[79846]: osdmap e97: 3 total, 3 up, 3 in
Dec 08 09:50:11 compute-1 ceph-mon[79846]: 9.b scrub ok
Dec 08 09:50:11 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec 08 09:50:11 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec 08 09:50:11 compute-1 ceph-mon[79846]: Reconfiguring osd.0 (monmap changed)...
Dec 08 09:50:11 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Dec 08 09:50:11 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 08 09:50:11 compute-1 ceph-mon[79846]: Reconfiguring daemon osd.0 on compute-1
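Each "Reconfiguring daemon ..." pair above is preceded by the mgr fetching that daemon's keyring and a minimal ceph.conf over the audit channel ("auth get" / "auth get-or-create" plus "config generate-minimal-conf"), before the cephadm/podman activity interleaved here redeploys the unit with the refreshed files. The same two commands shown in the audit lines can be run by hand; a minimal sketch wrapping that CLI (admin credentials on the host are an assumption):

    #!/usr/bin/env python3
    # Re-run the two commands the mgr dispatches before reconfiguring osd.0:
    # fetch its keyring and a minimal conf. Requires admin credentials on the host.
    import subprocess

    def ceph(*args: str) -> str:
        return subprocess.run(["ceph", *args], check=True,
                              capture_output=True, text=True).stdout

    keyring = ceph("auth", "get", "osd.0")
    minimal_conf = ceph("config", "generate-minimal-conf")
    print(keyring)
    print(minimal_conf)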
Dec 08 09:50:11 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Dec 08 09:50:11 compute-1 sudo[88010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4
Dec 08 09:50:11 compute-1 sudo[88010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:50:11 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Dec 08 09:50:11 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:50:11 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c003070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:50:11 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:50:11 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e58003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:50:12 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e98 e98: 3 total, 3 up, 3 in
Dec 08 09:50:12 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 98 pg[9.12( v 49'1026 (0'0,49'1026] local-lis/les=0/0 n=6 ec=52/36 lis/c=96/52 les/c/f=97/53/0 sis=98) [0] r=0 lpr=98 pi=[52,98)/1 luod=0'0 crt=49'1026 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:50:12 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 98 pg[9.12( v 49'1026 (0'0,49'1026] local-lis/les=0/0 n=6 ec=52/36 lis/c=96/52 les/c/f=97/53/0 sis=98) [0] r=0 lpr=98 pi=[52,98)/1 crt=49'1026 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:50:12 compute-1 radosgw[81249]: ====== starting new request req=0x7faefea325d0 =====
Dec 08 09:50:12 compute-1 radosgw[81249]: ====== req done req=0x7faefea325d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 08 09:50:12 compute-1 radosgw[81249]: beast: 0x7faefea325d0: 192.168.122.102 - anonymous [08/Dec/2025:09:50:12.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 08 09:50:12 compute-1 podman[88051]: 2025-12-08 09:50:12.167459764 +0000 UTC m=+0.055056771 container create 3f4f23b98611f764fa60bbaa0fae2a302e66646682ead2a617411ad321478cbf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_lumiere, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec 08 09:50:12 compute-1 systemd[1]: Started libpod-conmon-3f4f23b98611f764fa60bbaa0fae2a302e66646682ead2a617411ad321478cbf.scope.
Dec 08 09:50:12 compute-1 systemd[1]: Started libcrun container.
Dec 08 09:50:12 compute-1 podman[88051]: 2025-12-08 09:50:12.146353721 +0000 UTC m=+0.033950738 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 08 09:50:12 compute-1 podman[88051]: 2025-12-08 09:50:12.253112505 +0000 UTC m=+0.140709482 container init 3f4f23b98611f764fa60bbaa0fae2a302e66646682ead2a617411ad321478cbf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_lumiere, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 08 09:50:12 compute-1 podman[88051]: 2025-12-08 09:50:12.265033901 +0000 UTC m=+0.152630898 container start 3f4f23b98611f764fa60bbaa0fae2a302e66646682ead2a617411ad321478cbf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True)
Dec 08 09:50:12 compute-1 priceless_lumiere[88067]: 167 167
Dec 08 09:50:12 compute-1 systemd[1]: libpod-3f4f23b98611f764fa60bbaa0fae2a302e66646682ead2a617411ad321478cbf.scope: Deactivated successfully.
Dec 08 09:50:12 compute-1 podman[88051]: 2025-12-08 09:50:12.270367106 +0000 UTC m=+0.157964073 container attach 3f4f23b98611f764fa60bbaa0fae2a302e66646682ead2a617411ad321478cbf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_lumiere, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 08 09:50:12 compute-1 podman[88051]: 2025-12-08 09:50:12.271784507 +0000 UTC m=+0.159381484 container died 3f4f23b98611f764fa60bbaa0fae2a302e66646682ead2a617411ad321478cbf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid)
Dec 08 09:50:12 compute-1 systemd[1]: var-lib-containers-storage-overlay-5fefdad98b527365b7011647721ab0c70bd35ccc11eda3a602a41f292b3ee419-merged.mount: Deactivated successfully.
Dec 08 09:50:12 compute-1 podman[88051]: 2025-12-08 09:50:12.306062914 +0000 UTC m=+0.193659881 container remove 3f4f23b98611f764fa60bbaa0fae2a302e66646682ead2a617411ad321478cbf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec 08 09:50:12 compute-1 systemd[1]: libpod-conmon-3f4f23b98611f764fa60bbaa0fae2a302e66646682ead2a617411ad321478cbf.scope: Deactivated successfully.
Dec 08 09:50:12 compute-1 sudo[88010]: pam_unix(sudo:session): session closed for user root
Dec 08 09:50:12 compute-1 sudo[88090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 08 09:50:12 compute-1 sudo[88090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:50:12 compute-1 sudo[88090]: pam_unix(sudo:session): session closed for user root
Dec 08 09:50:12 compute-1 sudo[88115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4
Dec 08 09:50:12 compute-1 sudo[88115]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:50:12 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Dec 08 09:50:12 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Dec 08 09:50:12 compute-1 radosgw[81249]: ====== starting new request req=0x7faefea325d0 =====
Dec 08 09:50:12 compute-1 radosgw[81249]: ====== req done req=0x7faefea325d0 op status=0 http_status=200 latency=0.001000030s ======
Dec 08 09:50:12 compute-1 radosgw[81249]: beast: 0x7faefea325d0: 192.168.122.100 - anonymous [08/Dec/2025:09:50:12.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec 08 09:50:13 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e99 e99: 3 total, 3 up, 3 in
Dec 08 09:50:13 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 99 pg[9.12( v 49'1026 (0'0,49'1026] local-lis/les=98/99 n=6 ec=52/36 lis/c=96/52 les/c/f=97/53/0 sis=98) [0] r=0 lpr=98 pi=[52,98)/1 crt=49'1026 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
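The osd.0 pg_epoch traces show pg 9.12 bouncing between acting sets while the pool's placement is being adjusted: at epoch 96 its acting set moves from [0] to [1] and it goes Stray, at epoch 98 the acting set returns to [0] and it re-peers as Primary, and the line above marks activation completing ("AllReplicasActivated"). The current up/acting sets for any PG can be read back from the cluster; a small sketch using the JSON output of `ceph pg map` (the pg id is taken from the log, host credentials are an assumption):

    #!/usr/bin/env python3
    # Print the up and acting OSD sets for a placement group via `ceph pg map`.
    import json
    import subprocess

    PGID = "9.12"  # taken from the osd.0 peering traces above

    out = subprocess.run(["ceph", "pg", "map", PGID, "-f", "json"],
                         check=True, capture_output=True, text=True).stdout
    info = json.loads(out)
    print("epoch :", info.get("epoch"))
    print("up    :", info.get("up"))
    print("acting:", info.get("acting"))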
Dec 08 09:50:13 compute-1 ceph-mon[79846]: 12.10 deep-scrub starts
Dec 08 09:50:13 compute-1 ceph-mon[79846]: 12.10 deep-scrub ok
Dec 08 09:50:13 compute-1 ceph-mon[79846]: 11.4 scrub starts
Dec 08 09:50:13 compute-1 ceph-mon[79846]: 11.4 scrub ok
Dec 08 09:50:13 compute-1 ceph-mon[79846]: 9.5 scrub starts
Dec 08 09:50:13 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Dec 08 09:50:13 compute-1 ceph-mon[79846]: osdmap e98: 3 total, 3 up, 3 in
Dec 08 09:50:13 compute-1 ceph-mon[79846]: 9.5 scrub ok
Dec 08 09:50:13 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec 08 09:50:13 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec 08 09:50:13 compute-1 ceph-mon[79846]: Reconfiguring mon.compute-1 (monmap changed)...
Dec 08 09:50:13 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 08 09:50:13 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 08 09:50:13 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 08 09:50:13 compute-1 ceph-mon[79846]: Reconfiguring daemon mon.compute-1 on compute-1
Dec 08 09:50:13 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Dec 08 09:50:13 compute-1 podman[88157]: 2025-12-08 09:50:13.103793466 +0000 UTC m=+0.056425982 container create ae922592dd2972c00dfd6d5f1e8347a60bcdaf45f2bba47b731a2cab9aae89cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_robinson, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 08 09:50:13 compute-1 systemd[1]: Started libpod-conmon-ae922592dd2972c00dfd6d5f1e8347a60bcdaf45f2bba47b731a2cab9aae89cb.scope.
Dec 08 09:50:13 compute-1 podman[88157]: 2025-12-08 09:50:13.08502326 +0000 UTC m=+0.037655816 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 08 09:50:13 compute-1 systemd[1]: Started libcrun container.
Dec 08 09:50:13 compute-1 podman[88157]: 2025-12-08 09:50:13.198675044 +0000 UTC m=+0.151307580 container init ae922592dd2972c00dfd6d5f1e8347a60bcdaf45f2bba47b731a2cab9aae89cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 08 09:50:13 compute-1 podman[88157]: 2025-12-08 09:50:13.205895384 +0000 UTC m=+0.158527910 container start ae922592dd2972c00dfd6d5f1e8347a60bcdaf45f2bba47b731a2cab9aae89cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_robinson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec 08 09:50:13 compute-1 podman[88157]: 2025-12-08 09:50:13.209186169 +0000 UTC m=+0.161818685 container attach ae922592dd2972c00dfd6d5f1e8347a60bcdaf45f2bba47b731a2cab9aae89cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_robinson, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 08 09:50:13 compute-1 admiring_robinson[88173]: 167 167
Dec 08 09:50:13 compute-1 systemd[1]: libpod-ae922592dd2972c00dfd6d5f1e8347a60bcdaf45f2bba47b731a2cab9aae89cb.scope: Deactivated successfully.
Dec 08 09:50:13 compute-1 podman[88157]: 2025-12-08 09:50:13.211072444 +0000 UTC m=+0.163705000 container died ae922592dd2972c00dfd6d5f1e8347a60bcdaf45f2bba47b731a2cab9aae89cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 08 09:50:13 compute-1 systemd[1]: var-lib-containers-storage-overlay-785cb20eb68bafbf352e67f30388280bf305d8a879a6c6db31f1af9a4ef59453-merged.mount: Deactivated successfully.
Dec 08 09:50:13 compute-1 podman[88157]: 2025-12-08 09:50:13.247140503 +0000 UTC m=+0.199773029 container remove ae922592dd2972c00dfd6d5f1e8347a60bcdaf45f2bba47b731a2cab9aae89cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_robinson, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 08 09:50:13 compute-1 systemd[1]: libpod-conmon-ae922592dd2972c00dfd6d5f1e8347a60bcdaf45f2bba47b731a2cab9aae89cb.scope: Deactivated successfully.
Dec 08 09:50:13 compute-1 sudo[88115]: pam_unix(sudo:session): session closed for user root
Dec 08 09:50:13 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:50:13 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 08 09:50:13 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:50:13 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 08 09:50:13 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:50:13 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e78003890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:50:13 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 9.f scrub starts
Dec 08 09:50:13 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 9.f scrub ok
Dec 08 09:50:13 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:50:13 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e8800a7e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:50:13 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:50:13 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c003070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:50:14 compute-1 ceph-mon[79846]: 10.13 scrub starts
Dec 08 09:50:14 compute-1 ceph-mon[79846]: 10.13 scrub ok
Dec 08 09:50:14 compute-1 ceph-mon[79846]: 8.19 scrub starts
Dec 08 09:50:14 compute-1 ceph-mon[79846]: 8.19 scrub ok
Dec 08 09:50:14 compute-1 ceph-mon[79846]: 9.8 deep-scrub starts
Dec 08 09:50:14 compute-1 ceph-mon[79846]: pgmap v37: 353 pgs: 353 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Dec 08 09:50:14 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Dec 08 09:50:14 compute-1 ceph-mon[79846]: 9.8 deep-scrub ok
Dec 08 09:50:14 compute-1 ceph-mon[79846]: osdmap e99: 3 total, 3 up, 3 in
Dec 08 09:50:14 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec 08 09:50:14 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec 08 09:50:14 compute-1 ceph-mon[79846]: Reconfiguring mon.compute-2 (monmap changed)...
Dec 08 09:50:14 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 08 09:50:14 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 08 09:50:14 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 08 09:50:14 compute-1 ceph-mon[79846]: Reconfiguring daemon mon.compute-2 on compute-2
Dec 08 09:50:14 compute-1 radosgw[81249]: ====== starting new request req=0x7faefea325d0 =====
Dec 08 09:50:14 compute-1 radosgw[81249]: ====== req done req=0x7faefea325d0 op status=0 http_status=200 latency=0.001000029s ======
Dec 08 09:50:14 compute-1 radosgw[81249]: beast: 0x7faefea325d0: 192.168.122.102 - anonymous [08/Dec/2025:09:50:14.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 08 09:50:14 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Dec 08 09:50:14 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Dec 08 09:50:14 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e99 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 08 09:50:14 compute-1 radosgw[81249]: ====== starting new request req=0x7faefea325d0 =====
Dec 08 09:50:14 compute-1 radosgw[81249]: ====== req done req=0x7faefea325d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 08 09:50:14 compute-1 radosgw[81249]: beast: 0x7faefea325d0: 192.168.122.100 - anonymous [08/Dec/2025:09:50:14.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 08 09:50:15 compute-1 ceph-mon[79846]: 10.19 scrub starts
Dec 08 09:50:15 compute-1 ceph-mon[79846]: 10.19 scrub ok
Dec 08 09:50:15 compute-1 ceph-mon[79846]: 9.f scrub starts
Dec 08 09:50:15 compute-1 ceph-mon[79846]: 9.f scrub ok
Dec 08 09:50:15 compute-1 ceph-mon[79846]: 9.17 deep-scrub starts
Dec 08 09:50:15 compute-1 ceph-mon[79846]: 9.17 deep-scrub ok
Dec 08 09:50:15 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec 08 09:50:15 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec 08 09:50:15 compute-1 ceph-mon[79846]: Reconfiguring mgr.compute-2.zqytsv (monmap changed)...
Dec 08 09:50:15 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.zqytsv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 08 09:50:15 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 08 09:50:15 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 08 09:50:15 compute-1 ceph-mon[79846]: Reconfiguring daemon mgr.compute-2.zqytsv on compute-2
Dec 08 09:50:15 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec 08 09:50:15 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec 08 09:50:15 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Dec 08 09:50:15 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Dec 08 09:50:15 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Dec 08 09:50:15 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec 08 09:50:15 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:50:15 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e58003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:50:15 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 9.1a deep-scrub starts
Dec 08 09:50:15 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 9.1a deep-scrub ok
Dec 08 09:50:15 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:50:15 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e78003890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:50:15 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:50:15 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e8800a7e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:50:16 compute-1 ceph-mon[79846]: 10.2 scrub starts
Dec 08 09:50:16 compute-1 ceph-mon[79846]: 10.2 scrub ok
Dec 08 09:50:16 compute-1 ceph-mon[79846]: 9.6 scrub starts
Dec 08 09:50:16 compute-1 ceph-mon[79846]: 9.6 scrub ok
Dec 08 09:50:16 compute-1 ceph-mon[79846]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Dec 08 09:50:16 compute-1 ceph-mon[79846]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Dec 08 09:50:16 compute-1 ceph-mon[79846]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Dec 08 09:50:16 compute-1 ceph-mon[79846]: pgmap v39: 353 pgs: 1 peering, 1 active+clean+scrubbing+deep, 351 active+clean; 458 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 08 09:50:16 compute-1 radosgw[81249]: ====== starting new request req=0x7faefea325d0 =====
Dec 08 09:50:16 compute-1 radosgw[81249]: ====== req done req=0x7faefea325d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 08 09:50:16 compute-1 radosgw[81249]: beast: 0x7faefea325d0: 192.168.122.102 - anonymous [08/Dec/2025:09:50:16.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 08 09:50:16 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:50:16 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
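The reaper-thread entries bracket an NFS grace window: the server enters grace with a 90-second budget at 09:50:10, reloads client recovery info from the backend at 09:50:13, finds no clients with pending reclaims ("clid count(0)"), and lifts grace early at 09:50:16 rather than waiting out the full budget. A sketch computing the actual grace length from the two timestamps copied from those lines:

    #!/usr/bin/env python3
    # Compute how long the ganesha grace period actually lasted,
    # using the timestamps from the two reaper log lines above.
    from datetime import datetime

    fmt = "%d/%m/%Y %H:%M:%S"
    start = datetime.strptime("08/12/2025 09:50:10", fmt)   # "Now IN GRACE, duration 90"
    end = datetime.strptime("08/12/2025 09:50:16", fmt)     # "Now NOT IN GRACE"

    print("grace lasted", (end - start).total_seconds(), "of a budgeted 90 seconds")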
Dec 08 09:50:16 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Dec 08 09:50:16 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Dec 08 09:50:16 compute-1 radosgw[81249]: ====== starting new request req=0x7faefea325d0 =====
Dec 08 09:50:16 compute-1 radosgw[81249]: ====== req done req=0x7faefea325d0 op status=0 http_status=200 latency=0.001000029s ======
Dec 08 09:50:16 compute-1 radosgw[81249]: beast: 0x7faefea325d0: 192.168.122.100 - anonymous [08/Dec/2025:09:50:16.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 08 09:50:17 compute-1 ceph-mon[79846]: 10.15 scrub starts
Dec 08 09:50:17 compute-1 ceph-mon[79846]: 10.15 scrub ok
Dec 08 09:50:17 compute-1 ceph-mon[79846]: 9.1a deep-scrub starts
Dec 08 09:50:17 compute-1 ceph-mon[79846]: 9.1a deep-scrub ok
Dec 08 09:50:17 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:50:17 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c003070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:50:17 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Dec 08 09:50:17 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Dec 08 09:50:17 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:50:17 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e58003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:50:17 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:50:17 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e78003890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:50:18 compute-1 ceph-mon[79846]: 10.8 scrub starts
Dec 08 09:50:18 compute-1 ceph-mon[79846]: 10.8 scrub ok
Dec 08 09:50:18 compute-1 ceph-mon[79846]: 9.1f scrub starts
Dec 08 09:50:18 compute-1 ceph-mon[79846]: 9.1f scrub ok
Dec 08 09:50:18 compute-1 ceph-mon[79846]: pgmap v40: 353 pgs: 1 peering, 1 active+clean+scrubbing+deep, 351 active+clean; 458 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 08 09:50:18 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 08 09:50:18 compute-1 radosgw[81249]: ====== starting new request req=0x7faefea325d0 =====
Dec 08 09:50:18 compute-1 radosgw[81249]: ====== req done req=0x7faefea325d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 08 09:50:18 compute-1 radosgw[81249]: beast: 0x7faefea325d0: 192.168.122.102 - anonymous [08/Dec/2025:09:50:18.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 08 09:50:18 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Dec 08 09:50:18 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Dec 08 09:50:18 compute-1 radosgw[81249]: ====== starting new request req=0x7faefea325d0 =====
Dec 08 09:50:18 compute-1 radosgw[81249]: ====== req done req=0x7faefea325d0 op status=0 http_status=200 latency=0.001000029s ======
Dec 08 09:50:18 compute-1 radosgw[81249]: beast: 0x7faefea325d0: 192.168.122.100 - anonymous [08/Dec/2025:09:50:18.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 08 09:50:19 compute-1 ceph-mon[79846]: 10.14 scrub starts
Dec 08 09:50:19 compute-1 ceph-mon[79846]: 10.14 scrub ok
Dec 08 09:50:19 compute-1 ceph-mon[79846]: 9.1e scrub starts
Dec 08 09:50:19 compute-1 ceph-mon[79846]: 9.1e scrub ok
Dec 08 09:50:19 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec 08 09:50:19 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec 08 09:50:19 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 08 09:50:19 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 08 09:50:19 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec 08 09:50:19 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec 08 09:50:19 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 08 09:50:19 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 08 09:50:19 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 08 09:50:19 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Dec 08 09:50:19 compute-1 sshd-session[87881]: error: kex_exchange_identification: read: Connection timed out
Dec 08 09:50:19 compute-1 sshd-session[87881]: banner exchange: Connection from 120.48.123.76 port 45160: Connection timed out
Dec 08 09:50:19 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:50:19 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e8800a7e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:50:19 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e100 e100: 3 total, 3 up, 3 in
Dec 08 09:50:19 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 100 pg[9.15( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=68/68 les/c/f=69/69/0 sis=100) [0] r=0 lpr=100 pi=[68,100)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:50:19 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Dec 08 09:50:19 compute-1 ceph-osd[77531]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Dec 08 09:50:19 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e100 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 08 09:50:19 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:50:19 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c003070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:50:19 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:50:19 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e58004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:50:20 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e101 e101: 3 total, 3 up, 3 in
Dec 08 09:50:20 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 101 pg[9.15( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=68/68 les/c/f=69/69/0 sis=101) [0]/[2] r=-1 lpr=101 pi=[68,101)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:50:20 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 101 pg[9.15( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=68/68 les/c/f=69/69/0 sis=101) [0]/[2] r=-1 lpr=101 pi=[68,101)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 08 09:50:20 compute-1 radosgw[81249]: ====== starting new request req=0x7faefea325d0 =====
Dec 08 09:50:20 compute-1 radosgw[81249]: ====== req done req=0x7faefea325d0 op status=0 http_status=200 latency=0.001000030s ======
Dec 08 09:50:20 compute-1 radosgw[81249]: beast: 0x7faefea325d0: 192.168.122.102 - anonymous [08/Dec/2025:09:50:20.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec 08 09:50:20 compute-1 ceph-mon[79846]: 9.0 deep-scrub starts
Dec 08 09:50:20 compute-1 ceph-mon[79846]: 9.0 deep-scrub ok
Dec 08 09:50:20 compute-1 ceph-mon[79846]: 9.1d scrub starts
Dec 08 09:50:20 compute-1 ceph-mon[79846]: 9.1d scrub ok
Dec 08 09:50:20 compute-1 ceph-mon[79846]: pgmap v41: 353 pgs: 353 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 08 09:50:20 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec 08 09:50:20 compute-1 ceph-mon[79846]: osdmap e100: 3 total, 3 up, 3 in
Dec 08 09:50:20 compute-1 ceph-mon[79846]: osdmap e101: 3 total, 3 up, 3 in
Dec 08 09:50:20 compute-1 radosgw[81249]: ====== starting new request req=0x7faefea325d0 =====
Dec 08 09:50:20 compute-1 radosgw[81249]: ====== req done req=0x7faefea325d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 08 09:50:20 compute-1 radosgw[81249]: beast: 0x7faefea325d0: 192.168.122.100 - anonymous [08/Dec/2025:09:50:20.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 08 09:50:21 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e102 e102: 3 total, 3 up, 3 in
Dec 08 09:50:21 compute-1 ceph-mon[79846]: 9.4 scrub starts
Dec 08 09:50:21 compute-1 ceph-mon[79846]: 9.4 scrub ok
Dec 08 09:50:21 compute-1 ceph-mon[79846]: 9.12 scrub starts
Dec 08 09:50:21 compute-1 ceph-mon[79846]: 9.12 scrub ok
Dec 08 09:50:21 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Dec 08 09:50:21 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Dec 08 09:50:21 compute-1 ceph-mon[79846]: osdmap e102: 3 total, 3 up, 3 in
Dec 08 09:50:21 compute-1 sudo[88193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 08 09:50:21 compute-1 sudo[88193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:50:21 compute-1 sudo[88193]: pam_unix(sudo:session): session closed for user root
Dec 08 09:50:21 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:50:21 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e58004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:50:21 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 102 pg[9.16( v 49'1026 (0'0,49'1026] local-lis/les=70/71 n=4 ec=52/36 lis/c=70/70 les/c/f=71/71/0 sis=102 pruub=12.184019089s) [2] r=-1 lpr=102 pi=[70,102)/1 crt=49'1026 mlcod 0'0 active pruub 242.795410156s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:50:21 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 102 pg[9.16( v 49'1026 (0'0,49'1026] local-lis/les=70/71 n=4 ec=52/36 lis/c=70/70 les/c/f=71/71/0 sis=102 pruub=12.183804512s) [2] r=-1 lpr=102 pi=[70,102)/1 crt=49'1026 mlcod 0'0 unknown NOTIFY pruub 242.795410156s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:50:21 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:50:21 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e8800a7e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:50:21 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:50:21 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c003070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:50:22 compute-1 radosgw[81249]: ====== starting new request req=0x7faefea325d0 =====
Dec 08 09:50:22 compute-1 radosgw[81249]: ====== req done req=0x7faefea325d0 op status=0 http_status=200 latency=0.001000029s ======
Dec 08 09:50:22 compute-1 radosgw[81249]: beast: 0x7faefea325d0: 192.168.122.102 - anonymous [08/Dec/2025:09:50:22.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 08 09:50:22 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e103 e103: 3 total, 3 up, 3 in
Dec 08 09:50:22 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 103 pg[9.16( v 49'1026 (0'0,49'1026] local-lis/les=70/71 n=4 ec=52/36 lis/c=70/70 les/c/f=71/71/0 sis=103) [2]/[0] r=0 lpr=103 pi=[70,103)/1 crt=49'1026 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:50:22 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 103 pg[9.15( v 49'1026 (0'0,49'1026] local-lis/les=0/0 n=4 ec=52/36 lis/c=101/68 les/c/f=102/69/0 sis=103) [0] r=0 lpr=103 pi=[68,103)/1 luod=0'0 crt=49'1026 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:50:22 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 103 pg[9.16( v 49'1026 (0'0,49'1026] local-lis/les=70/71 n=4 ec=52/36 lis/c=70/70 les/c/f=71/71/0 sis=103) [2]/[0] r=0 lpr=103 pi=[70,103)/1 crt=49'1026 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 08 09:50:22 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 103 pg[9.15( v 49'1026 (0'0,49'1026] local-lis/les=0/0 n=4 ec=52/36 lis/c=101/68 les/c/f=102/69/0 sis=103) [0] r=0 lpr=103 pi=[68,103)/1 crt=49'1026 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 08 09:50:22 compute-1 ceph-mon[79846]: 9.1c scrub starts
Dec 08 09:50:22 compute-1 ceph-mon[79846]: 9.1c scrub ok
Dec 08 09:50:22 compute-1 ceph-mon[79846]: pgmap v44: 353 pgs: 353 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 08 09:50:22 compute-1 sshd[1006]: Timeout before authentication for connection from 58.221.60.25 to 38.102.83.181, pid = 84773
Dec 08 09:50:22 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-haproxy-nfs-cephfs-compute-1-opvoqw[85819]: [WARNING] 341/095022 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 08 09:50:22 compute-1 radosgw[81249]: ====== starting new request req=0x7faefea325d0 =====
Dec 08 09:50:22 compute-1 radosgw[81249]: ====== req done req=0x7faefea325d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 08 09:50:22 compute-1 radosgw[81249]: beast: 0x7faefea325d0: 192.168.122.100 - anonymous [08/Dec/2025:09:50:22.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 08 09:50:23 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e104 e104: 3 total, 3 up, 3 in
Dec 08 09:50:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 104 pg[9.15( v 49'1026 (0'0,49'1026] local-lis/les=103/104 n=4 ec=52/36 lis/c=101/68 les/c/f=102/69/0 sis=103) [0] r=0 lpr=103 pi=[68,103)/1 crt=49'1026 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:50:23 compute-1 ceph-mon[79846]: osdmap e103: 3 total, 3 up, 3 in
Dec 08 09:50:23 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Dec 08 09:50:23 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Dec 08 09:50:23 compute-1 ceph-mon[79846]: osdmap e104: 3 total, 3 up, 3 in
Dec 08 09:50:23 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 104 pg[9.16( v 49'1026 (0'0,49'1026] local-lis/les=103/104 n=4 ec=52/36 lis/c=70/70 les/c/f=71/71/0 sis=103) [2]/[0] async=[2] r=0 lpr=103 pi=[70,103)/1 crt=49'1026 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:50:23 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:50:23 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c003070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:50:23 compute-1 sudo[88220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 08 09:50:23 compute-1 sudo[88220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:50:23 compute-1 sudo[88220]: pam_unix(sudo:session): session closed for user root
Dec 08 09:50:23 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:50:23 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e58004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:50:23 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:50:23 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e8800a7e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:50:24 compute-1 radosgw[81249]: ====== starting new request req=0x7faefea325d0 =====
Dec 08 09:50:24 compute-1 radosgw[81249]: ====== req done req=0x7faefea325d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 08 09:50:24 compute-1 radosgw[81249]: beast: 0x7faefea325d0: 192.168.122.102 - anonymous [08/Dec/2025:09:50:24.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 08 09:50:24 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e105 e105: 3 total, 3 up, 3 in
Dec 08 09:50:24 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 105 pg[9.16( v 49'1026 (0'0,49'1026] local-lis/les=103/104 n=4 ec=52/36 lis/c=103/70 les/c/f=104/71/0 sis=105 pruub=14.971553802s) [2] async=[2] r=-1 lpr=105 pi=[70,105)/1 crt=49'1026 mlcod 49'1026 active pruub 248.048019409s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:50:24 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 105 pg[9.16( v 49'1026 (0'0,49'1026] local-lis/les=103/104 n=4 ec=52/36 lis/c=103/70 les/c/f=104/71/0 sis=105 pruub=14.971313477s) [2] r=-1 lpr=105 pi=[70,105)/1 crt=49'1026 mlcod 0'0 unknown NOTIFY pruub 248.048019409s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:50:24 compute-1 ceph-mon[79846]: pgmap v47: 353 pgs: 353 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 08 09:50:24 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec 08 09:50:24 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec 08 09:50:24 compute-1 ceph-mon[79846]: osdmap e105: 3 total, 3 up, 3 in
Dec 08 09:50:24 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e105 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 08 09:50:24 compute-1 radosgw[81249]: ====== starting new request req=0x7faefea325d0 =====
Dec 08 09:50:24 compute-1 radosgw[81249]: ====== req done req=0x7faefea325d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 08 09:50:24 compute-1 radosgw[81249]: beast: 0x7faefea325d0: 192.168.122.100 - anonymous [08/Dec/2025:09:50:24.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 08 09:50:25 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e106 e106: 3 total, 3 up, 3 in
Dec 08 09:50:25 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:50:25 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e780045a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:50:25 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:50:25 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c003070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:50:25 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:50:25 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e8800a7e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:50:26 compute-1 radosgw[81249]: ====== starting new request req=0x7faefea325d0 =====
Dec 08 09:50:26 compute-1 radosgw[81249]: ====== req done req=0x7faefea325d0 op status=0 http_status=200 latency=0.001000029s ======
Dec 08 09:50:26 compute-1 radosgw[81249]: beast: 0x7faefea325d0: 192.168.122.102 - anonymous [08/Dec/2025:09:50:26.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 08 09:50:26 compute-1 ceph-mon[79846]: pgmap v50: 353 pgs: 1 remapped+peering, 1 peering, 351 active+clean; 458 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 08 09:50:26 compute-1 ceph-mon[79846]: osdmap e106: 3 total, 3 up, 3 in
Dec 08 09:50:26 compute-1 radosgw[81249]: ====== starting new request req=0x7faefea325d0 =====
Dec 08 09:50:26 compute-1 radosgw[81249]: ====== req done req=0x7faefea325d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 08 09:50:26 compute-1 radosgw[81249]: beast: 0x7faefea325d0: 192.168.122.100 - anonymous [08/Dec/2025:09:50:26.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 08 09:50:27 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:50:27 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e60002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:50:27 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:50:27 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e780045a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:50:27 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:50:27 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c004500 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:50:28 compute-1 radosgw[81249]: ====== starting new request req=0x7faefea325d0 =====
Dec 08 09:50:28 compute-1 radosgw[81249]: ====== req done req=0x7faefea325d0 op status=0 http_status=200 latency=0.001000029s ======
Dec 08 09:50:28 compute-1 radosgw[81249]: beast: 0x7faefea325d0: 192.168.122.102 - anonymous [08/Dec/2025:09:50:28.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 08 09:50:28 compute-1 ceph-mon[79846]: pgmap v52: 353 pgs: 1 remapped+peering, 1 peering, 351 active+clean; 458 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 08 09:50:28 compute-1 radosgw[81249]: ====== starting new request req=0x7faefea325d0 =====
Dec 08 09:50:28 compute-1 radosgw[81249]: ====== req done req=0x7faefea325d0 op status=0 http_status=200 latency=0.001000029s ======
Dec 08 09:50:28 compute-1 radosgw[81249]: beast: 0x7faefea325d0: 192.168.122.100 - anonymous [08/Dec/2025:09:50:28.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 08 09:50:29 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Dec 08 09:50:29 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e107 e107: 3 total, 3 up, 3 in
Dec 08 09:50:29 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:50:29 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e8800a7e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:50:29 compute-1 sshd-session[88250]: Accepted publickey for zuul from 192.168.122.10 port 54266 ssh2: ECDSA SHA256:OYiJ0qK9HlckOLsAMneS1eCh6uM9OgfWStqM+CYb3U8
Dec 08 09:50:29 compute-1 systemd-logind[795]: New session 37 of user zuul.
Dec 08 09:50:29 compute-1 systemd[1]: Started Session 37 of User zuul.
Dec 08 09:50:29 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 08 09:50:29 compute-1 sshd-session[88250]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 08 09:50:29 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:50:29 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e60002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:50:29 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:50:29 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e780045a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:50:29 compute-1 sudo[88254]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Dec 08 09:50:29 compute-1 sudo[88254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 08 09:50:30 compute-1 radosgw[81249]: ====== starting new request req=0x7faefea325d0 =====
Dec 08 09:50:30 compute-1 radosgw[81249]: ====== req done req=0x7faefea325d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 08 09:50:30 compute-1 radosgw[81249]: beast: 0x7faefea325d0: 192.168.122.102 - anonymous [08/Dec/2025:09:50:30.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 08 09:50:30 compute-1 ceph-mon[79846]: pgmap v53: 353 pgs: 353 active+clean; 458 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 08 09:50:30 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Dec 08 09:50:30 compute-1 ceph-mon[79846]: osdmap e107: 3 total, 3 up, 3 in
Dec 08 09:50:30 compute-1 radosgw[81249]: ====== starting new request req=0x7faefea325d0 =====
Dec 08 09:50:30 compute-1 radosgw[81249]: ====== req done req=0x7faefea325d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 08 09:50:30 compute-1 radosgw[81249]: beast: 0x7faefea325d0: 192.168.122.100 - anonymous [08/Dec/2025:09:50:30.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 08 09:50:31 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Dec 08 09:50:31 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e108 e108: 3 total, 3 up, 3 in
Dec 08 09:50:31 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:50:31 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c004500 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:50:31 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:50:31 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e8800a7e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:50:31 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:50:31 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e60002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:50:32 compute-1 radosgw[81249]: ====== starting new request req=0x7faefea325d0 =====
Dec 08 09:50:32 compute-1 radosgw[81249]: ====== req done req=0x7faefea325d0 op status=0 http_status=200 latency=0.001000029s ======
Dec 08 09:50:32 compute-1 radosgw[81249]: beast: 0x7faefea325d0: 192.168.122.102 - anonymous [08/Dec/2025:09:50:32.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 08 09:50:32 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e109 e109: 3 total, 3 up, 3 in
Dec 08 09:50:32 compute-1 ceph-mon[79846]: pgmap v55: 353 pgs: 353 active+clean; 458 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 302 B/s rd, 0 op/s; 16 B/s, 0 objects/s recovering
Dec 08 09:50:32 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Dec 08 09:50:32 compute-1 ceph-mon[79846]: osdmap e108: 3 total, 3 up, 3 in
Dec 08 09:50:32 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 08 09:50:32 compute-1 radosgw[81249]: ====== starting new request req=0x7faefea325d0 =====
Dec 08 09:50:32 compute-1 radosgw[81249]: ====== req done req=0x7faefea325d0 op status=0 http_status=200 latency=0.001000029s ======
Dec 08 09:50:32 compute-1 radosgw[81249]: beast: 0x7faefea325d0: 192.168.122.100 - anonymous [08/Dec/2025:09:50:32.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 08 09:50:33 compute-1 ceph-mon[79846]: osdmap e109: 3 total, 3 up, 3 in
Dec 08 09:50:33 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Dec 08 09:50:33 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e110 e110: 3 total, 3 up, 3 in
Dec 08 09:50:33 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 110 pg[9.1a( v 49'1026 (0'0,49'1026] local-lis/les=81/82 n=4 ec=52/36 lis/c=81/81 les/c/f=82/82/0 sis=110 pruub=8.743705750s) [1] r=-1 lpr=110 pi=[81,110)/1 crt=49'1026 mlcod 0'0 active pruub 251.031829834s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:50:33 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 110 pg[9.1a( v 49'1026 (0'0,49'1026] local-lis/les=81/82 n=4 ec=52/36 lis/c=81/81 les/c/f=82/82/0 sis=110 pruub=8.742689133s) [1] r=-1 lpr=110 pi=[81,110)/1 crt=49'1026 mlcod 0'0 unknown NOTIFY pruub 251.031829834s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:50:33 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:50:33 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e780045a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:50:33 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:50:33 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c004500 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:50:33 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:50:33 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e8800a7e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:50:34 compute-1 radosgw[81249]: ====== starting new request req=0x7faefea325d0 =====
Dec 08 09:50:34 compute-1 radosgw[81249]: ====== req done req=0x7faefea325d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 08 09:50:34 compute-1 radosgw[81249]: beast: 0x7faefea325d0: 192.168.122.102 - anonymous [08/Dec/2025:09:50:34.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 08 09:50:34 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e111 e111: 3 total, 3 up, 3 in
Dec 08 09:50:34 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 111 pg[9.1a( v 49'1026 (0'0,49'1026] local-lis/les=81/82 n=4 ec=52/36 lis/c=81/81 les/c/f=82/82/0 sis=111) [1]/[0] r=0 lpr=111 pi=[81,111)/1 crt=49'1026 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:50:34 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 111 pg[9.1a( v 49'1026 (0'0,49'1026] local-lis/les=81/82 n=4 ec=52/36 lis/c=81/81 les/c/f=82/82/0 sis=111) [1]/[0] r=0 lpr=111 pi=[81,111)/1 crt=49'1026 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 08 09:50:34 compute-1 ceph-mon[79846]: pgmap v58: 353 pgs: 353 active+clean; 458 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering
Dec 08 09:50:34 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Dec 08 09:50:34 compute-1 ceph-mon[79846]: osdmap e110: 3 total, 3 up, 3 in
Dec 08 09:50:34 compute-1 ovs-vsctl[88453]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Dec 08 09:50:34 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 08 09:50:34 compute-1 radosgw[81249]: ====== starting new request req=0x7faefea325d0 =====
Dec 08 09:50:34 compute-1 radosgw[81249]: ====== req done req=0x7faefea325d0 op status=0 http_status=200 latency=0.001000029s ======
Dec 08 09:50:34 compute-1 radosgw[81249]: beast: 0x7faefea325d0: 192.168.122.100 - anonymous [08/Dec/2025:09:50:34.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 08 09:50:35 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e112 e112: 3 total, 3 up, 3 in
Dec 08 09:50:35 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 112 pg[9.1a( v 49'1026 (0'0,49'1026] local-lis/les=111/112 n=4 ec=52/36 lis/c=81/81 les/c/f=82/82/0 sis=111) [1]/[0] async=[1] r=0 lpr=111 pi=[81,111)/1 crt=49'1026 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 08 09:50:35 compute-1 ceph-mon[79846]: osdmap e111: 3 total, 3 up, 3 in
Dec 08 09:50:35 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:50:35 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e60002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:50:35 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:50:35 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e780045a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:50:35 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:50:35 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e60002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:50:36 compute-1 radosgw[81249]: ====== starting new request req=0x7faefea325d0 =====
Dec 08 09:50:36 compute-1 radosgw[81249]: ====== req done req=0x7faefea325d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 08 09:50:36 compute-1 radosgw[81249]: beast: 0x7faefea325d0: 192.168.122.102 - anonymous [08/Dec/2025:09:50:36.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 08 09:50:36 compute-1 lvm[88793]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 08 09:50:36 compute-1 lvm[88793]: VG ceph_vg0 finished
Dec 08 09:50:36 compute-1 ceph-mon[79846]: pgmap v61: 353 pgs: 1 remapped+peering, 352 active+clean; 458 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 08 09:50:36 compute-1 ceph-mon[79846]: osdmap e112: 3 total, 3 up, 3 in
Dec 08 09:50:36 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e113 e113: 3 total, 3 up, 3 in
Dec 08 09:50:36 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 113 pg[9.1a( v 49'1026 (0'0,49'1026] local-lis/les=111/112 n=4 ec=52/36 lis/c=111/81 les/c/f=112/82/0 sis=113 pruub=14.901542664s) [1] async=[1] r=-1 lpr=113 pi=[81,113)/1 crt=49'1026 mlcod 49'1026 active pruub 260.347503662s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 08 09:50:36 compute-1 ceph-osd[77531]: osd.0 pg_epoch: 113 pg[9.1a( v 49'1026 (0'0,49'1026] local-lis/les=111/112 n=4 ec=52/36 lis/c=111/81 les/c/f=112/82/0 sis=113 pruub=14.901350975s) [1] r=-1 lpr=113 pi=[81,113)/1 crt=49'1026 mlcod 0'0 unknown NOTIFY pruub 260.347503662s@ mbc={}] state<Start>: transitioning to Stray
Dec 08 09:50:36 compute-1 radosgw[81249]: ====== starting new request req=0x7faefea325d0 =====
Dec 08 09:50:36 compute-1 radosgw[81249]: ====== req done req=0x7faefea325d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 08 09:50:36 compute-1 radosgw[81249]: beast: 0x7faefea325d0: 192.168.122.100 - anonymous [08/Dec/2025:09:50:36.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 08 09:50:37 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:50:37 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e8800a7e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:50:37 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e114 e114: 3 total, 3 up, 3 in
Dec 08 09:50:37 compute-1 ceph-mon[79846]: osdmap e113: 3 total, 3 up, 3 in
Dec 08 09:50:37 compute-1 crontab[89213]: (root) LIST (root)
Dec 08 09:50:37 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:50:37 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c004500 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:50:37 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:50:37 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e780045a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:50:38 compute-1 radosgw[81249]: ====== starting new request req=0x7faefea325d0 =====
Dec 08 09:50:38 compute-1 radosgw[81249]: ====== req done req=0x7faefea325d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 08 09:50:38 compute-1 radosgw[81249]: beast: 0x7faefea325d0: 192.168.122.102 - anonymous [08/Dec/2025:09:50:38.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 08 09:50:38 compute-1 ceph-mon[79846]: pgmap v64: 353 pgs: 1 remapped+peering, 352 active+clean; 458 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 08 09:50:38 compute-1 ceph-mon[79846]: osdmap e114: 3 total, 3 up, 3 in
Dec 08 09:50:38 compute-1 radosgw[81249]: ====== starting new request req=0x7faefea325d0 =====
Dec 08 09:50:38 compute-1 radosgw[81249]: ====== req done req=0x7faefea325d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 08 09:50:38 compute-1 radosgw[81249]: beast: 0x7faefea325d0: 192.168.122.100 - anonymous [08/Dec/2025:09:50:38.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 08 09:50:39 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:50:39 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e60003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:50:39 compute-1 ceph-mon[79846]: pgmap v66: 353 pgs: 353 active+clean; 458 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 449 B/s rd, 0 op/s; 48 B/s, 2 objects/s recovering
Dec 08 09:50:39 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Dec 08 09:50:39 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e115 e115: 3 total, 3 up, 3 in
Dec 08 09:50:39 compute-1 sshd-session[89293]: Received disconnect from 79.32.212.213 port 54820:11: Bye Bye [preauth]
Dec 08 09:50:39 compute-1 sshd-session[89293]: Disconnected from authenticating user root 79.32.212.213 port 54820 [preauth]
Dec 08 09:50:39 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 08 09:50:39 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:50:39 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e8800a7e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:50:39 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:50:39 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e7c004500 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:50:40 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e116 e116: 3 total, 3 up, 3 in
Dec 08 09:50:40 compute-1 systemd[1]: Starting Hostname Service...
Dec 08 09:50:40 compute-1 radosgw[81249]: ====== starting new request req=0x7faefea325d0 =====
Dec 08 09:50:40 compute-1 radosgw[81249]: ====== req done req=0x7faefea325d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 08 09:50:40 compute-1 radosgw[81249]: beast: 0x7faefea325d0: 192.168.122.102 - anonymous [08/Dec/2025:09:50:40.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 08 09:50:40 compute-1 systemd[1]: Started Hostname Service.
Dec 08 09:50:40 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Dec 08 09:50:40 compute-1 ceph-mon[79846]: osdmap e115: 3 total, 3 up, 3 in
Dec 08 09:50:40 compute-1 ceph-mon[79846]: osdmap e116: 3 total, 3 up, 3 in
Dec 08 09:50:40 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-haproxy-nfs-cephfs-compute-1-opvoqw[85819]: [WARNING] 341/095040 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 08 09:50:40 compute-1 radosgw[81249]: ====== starting new request req=0x7faefea325d0 =====
Dec 08 09:50:40 compute-1 radosgw[81249]: ====== req done req=0x7faefea325d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 08 09:50:40 compute-1 radosgw[81249]: beast: 0x7faefea325d0: 192.168.122.100 - anonymous [08/Dec/2025:09:50:40.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 08 09:50:41 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e117 e117: 3 total, 3 up, 3 in
Dec 08 09:50:41 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:50:41 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e780045a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:50:41 compute-1 sudo[89466]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 08 09:50:41 compute-1 sudo[89466]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 08 09:50:41 compute-1 sudo[89466]: pam_unix(sudo:session): session closed for user root
Dec 08 09:50:41 compute-1 ceph-mon[79846]: pgmap v69: 353 pgs: 353 active+clean; 458 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 463 B/s rd, 0 op/s; 49 B/s, 2 objects/s recovering
Dec 08 09:50:41 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Dec 08 09:50:41 compute-1 ceph-mon[79846]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Dec 08 09:50:41 compute-1 ceph-mon[79846]: osdmap e117: 3 total, 3 up, 3 in
Dec 08 09:50:41 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:50:41 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e60003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:50:41 compute-1 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-0-0-compute-1-drrxym[85397]: 08/12/2025 09:50:41 : epoch 69369ef2 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6e8800a7e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 08 09:50:42 compute-1 ceph-mon[79846]: mon.compute-1@2(peon).osd e118 e118: 3 total, 3 up, 3 in
Dec 08 09:50:42 compute-1 radosgw[81249]: ====== starting new request req=0x7faefea325d0 =====
Dec 08 09:50:42 compute-1 radosgw[81249]: ====== req done req=0x7faefea325d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 08 09:50:42 compute-1 radosgw[81249]: beast: 0x7faefea325d0: 192.168.122.102 - anonymous [08/Dec/2025:09:50:42.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s